Compare commits


2 Commits

Author SHA1 Message Date
Fabiano Rosas
71b4afac84 tests/qtest: Pass migration tests individually to meson
Having glib tests (qtest) all defined in a single test file makes it
hard to know which test has failed when running CI. Create a new
'migration' test suite and move the migration tests individually to
meson.

For now, use the global migration-test timeout value, but we could set
a per-subtest timeout in the future.

Sample output:

$ ../configure --target-list=x86_64-softmmu,aarch64-softmmu ...
$ make check-migration
...
36/469 qemu:migration / /x86_64/migration/precopy/unix/plain                       OK    34.25s   1 subtests passed
37/469 qemu:migration / /x86_64/migration/multifd/tcp/tls/x509/default-host        OK     7.33s   1 subtests passed
39/469 qemu:migration / /x86_64/migration/multifd/tcp/tls/x509/allow-anon-client   OK     7.32s   1 subtests passed
40/469 qemu:migration / /aarch64/migration/postcopy/compress/plain                 SKIP   0.04s
41/469 qemu:migration / /aarch64/migration/postcopy/recovery/compress/plain        SKIP   0.04s
42/469 qemu:migration / /aarch64/migration/bad_dest                                OK     0.65s   1 subtests passed

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-30 11:33:21 -03:00
Fabiano Rosas
36466cab88 tests/qtest: Add a script to gather migration tests list
This is a workaround for limitations with meson.

We'd like to have each migration (sub)test registered with meson's
test() function, but since they are all inside the single
migration-test.c there's no way for meson to see the tests. We need an
external way to generate the list of tests and pass it to meson.

We cannot call 'migration-test -l' because we'd need to build the
migration-test binary first, and while that is possible, there's no
subsequent way to get the generated list into a meson variable for
consumption. None of meson's routines support retrieving an arbitrary
list of strings from a command at build (vs. configure) time.

We also cannot use generators to have 'migration-test' write to a file
because that would happen at build time and meson does not support
reading from a file in the build directory.

So the only approach left is to either have the list of tests
committed into the source code or to grep the source code for the
list. Committing the file would be harder to maintain, and the test
registration in migration-test.c is fairly static, so this patch uses
the grep approach, implemented in Python because it is more portable.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-06-30 11:32:54 -03:00
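
As a rough illustration of the grep approach described above, a minimal
sketch of such a script could look like the following. The registration
pattern (qtest_add_func), the file path, and the output format are
assumptions for illustration, not the actual script added by this
commit:

#!/usr/bin/env python3
# Hypothetical sketch: list migration tests by grepping the (assumed)
# qtest_add_func() registrations in migration-test.c and printing one
# test path per line, so meson can capture the list at configure time.
import re
import sys

PATTERN = re.compile(r'qtest_add_func\("([^"]+)"')

def list_tests(source_file):
    with open(source_file) as f:
        return PATTERN.findall(f.read())

if __name__ == '__main__':
    for name in list_tests(sys.argv[1]):
        print(name)
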
965 changed files with 10247 additions and 30173 deletions


@@ -25,7 +25,6 @@
# rebuilding all the object files we skip in the artifacts
.native_build_artifact_template:
artifacts:
when: on_success
expire_in: 2 days
paths:
- build
@@ -54,7 +53,6 @@
extends: .common_test_job_template
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt
@@ -70,7 +68,7 @@
policy: pull-push
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
when: on_failure
expire_in: 7 days
paths:
- build/tests/results/latest/results.xml


@@ -454,7 +454,7 @@ gcov:
IMAGE: ubuntu2204
CONFIGURE_ARGS: --enable-gcov
TARGETS: aarch64-softmmu ppc64-softmmu s390x-softmmu x86_64-softmmu
MAKE_CHECK_ARGS: check-unit check-softfloat
MAKE_CHECK_ARGS: check
after_script:
- cd build
- gcovr --xml-pretty --exclude-unreachable-branches --print-summary
@@ -462,12 +462,8 @@ gcov:
coverage: /^\s*lines:\s*\d+.\d+\%/
artifacts:
name: ${CI_JOB_NAME}-${CI_COMMIT_REF_NAME}-${CI_COMMIT_SHA}
when: always
expire_in: 2 days
paths:
- build/meson-logs/testlog.txt
reports:
junit: build/meson-logs/testlog.junit.xml
coverage_report:
coverage_format: cobertura
path: build/coverage.xml
@@ -591,7 +587,6 @@ pages:
- make -C build install DESTDIR=$(pwd)/temp-install
- mv temp-install/usr/local/share/doc/qemu/* public/
artifacts:
when: on_success
paths:
- public
variables:


@@ -15,7 +15,7 @@ env:
folder: $HOME/.cache/qemu-vm
install_script:
- dnf update -y
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget meson
- dnf install -y git make openssh-clients qemu-img qemu-system-x86 wget
clone_script:
- git clone --depth 100 "$CI_REPOSITORY_URL" .
- git fetch origin "$CI_COMMIT_REF_NAME"


@@ -55,7 +55,6 @@
.cross_test_artifacts:
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: always
expire_in: 7 days
paths:
- build/meson-logs/testlog.txt


@@ -57,7 +57,7 @@ cross-i386-tci:
variables:
IMAGE: fedora-i386-cross
ACCEL: tcg-interpreter
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user --disable-plugins
EXTRA_CONFIGURE_OPTS: --target-list=i386-softmmu,i386-linux-user,aarch64-softmmu,aarch64-linux-user,ppc-softmmu,ppc-linux-user
MAKE_CHECK_ARGS: check check-tcg
cross-mipsel-system:
@@ -169,7 +169,6 @@ cross-win32-system:
CROSS_SKIP_TARGETS: alpha-softmmu avr-softmmu hppa-softmmu m68k-softmmu
microblazeel-softmmu mips64el-softmmu nios2-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe
@@ -185,7 +184,6 @@ cross-win64-system:
or1k-softmmu rx-softmmu sh4eb-softmmu sparc64-softmmu
tricore-softmmu xtensaeb-softmmu
artifacts:
when: on_success
paths:
- build/qemu-setup*.exe


@@ -63,7 +63,6 @@ build-opensbi:
stage: build
needs: ['docker-opensbi']
artifacts:
when: on_success
paths: # 'artifacts.zip' will contains the following files:
- pc-bios/opensbi-riscv32-generic-fw_dynamic.bin
- pc-bios/opensbi-riscv64-generic-fw_dynamic.bin


@@ -7,15 +7,10 @@
cache:
key: "${CI_JOB_NAME}-cache"
paths:
- msys64/var/cache
when: always
- ${CI_PROJECT_DIR}/msys64/var/cache
needs: []
stage: build
timeout: 80m
variables:
# This feature doesn't (currently) work with PowerShell, it stops
# the echo'ing of commands being run and doesn't show any timing
FF_SCRIPT_SECTIONS: 0
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
expire_in: 7 days
@@ -24,40 +19,14 @@
reports:
junit: "build/meson-logs/testlog.junit.xml"
before_script:
- Write-Output "Acquiring msys2.exe installer at $(Get-Date -Format u)"
- If ( !(Test-Path -Path msys64\var\cache ) ) {
mkdir msys64\var\cache
}
- Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe.sig"
-outfile "msys2.exe.sig"
- if ( Test-Path -Path msys64\var\cache\msys2.exe.sig ) {
Write-Output "Cached installer sig" ;
if ( ((Get-FileHash msys2.exe.sig).Hash -ne (Get-FileHash msys64\var\cache\msys2.exe.sig).Hash) ) {
Write-Output "Mis-matched installer sig, new installer download required" ;
Remove-Item -Path msys64\var\cache\msys2.exe.sig ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
} else {
Write-Output "Matched installer sig, cached installer still valid"
}
} else {
Write-Output "No cached installer sig, new installer download required" ;
if ( Test-Path -Path msys64\var\cache\msys2.exe ) {
Remove-Item -Path msys64\var\cache\msys2.exe
}
}
- if ( !(Test-Path -Path msys64\var\cache\msys2.exe ) ) {
Write-Output "Fetching latest installer" ;
- If ( !(Test-Path -Path msys64\var\cache\msys2.exe ) ) {
Invoke-WebRequest
"https://repo.msys2.org/distrib/msys2-x86_64-latest.sfx.exe"
-outfile "msys64\var\cache\msys2.exe" ;
Copy-Item -Path msys2.exe.sig -Destination msys64\var\cache\msys2.exe.sig
} else {
Write-Output "Using cached installer"
"https://github.com/msys2/msys2-installer/releases/download/2022-06-03/msys2-base-x86_64-20220603.sfx.exe"
-outfile "msys64\var\cache\msys2.exe"
}
- Write-Output "Invoking msys2.exe installer at $(Get-Date -Format u)"
- msys64\var\cache\msys2.exe -y
- ((Get-Content -path .\msys64\etc\\post-install\\07-pacman-key.post -Raw)
-replace '--refresh-keys', '--version') |
@@ -66,66 +35,97 @@
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Core update
- .\msys64\usr\bin\bash -lc 'pacman --noconfirm -Syuu' # Normal update
- taskkill /F /FI "MODULES eq msys-2.0.dll"
script:
- Write-Output "Installing mingw packages at $(Get-Date -Format u)"
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
$MINGW_TARGET-capstone
$MINGW_TARGET-curl
$MINGW_TARGET-cyrus-sasl
$MINGW_TARGET-dtc
$MINGW_TARGET-gcc
$MINGW_TARGET-glib2
$MINGW_TARGET-gnutls
$MINGW_TARGET-gtk3
$MINGW_TARGET-libgcrypt
$MINGW_TARGET-libjpeg-turbo
$MINGW_TARGET-libnfs
$MINGW_TARGET-libpng
$MINGW_TARGET-libssh
$MINGW_TARGET-libtasn1
$MINGW_TARGET-libusb
$MINGW_TARGET-lzo2
$MINGW_TARGET-nettle
$MINGW_TARGET-ninja
$MINGW_TARGET-pixman
$MINGW_TARGET-pkgconf
$MINGW_TARGET-python
$MINGW_TARGET-SDL2
$MINGW_TARGET-SDL2_image
$MINGW_TARGET-snappy
$MINGW_TARGET-spice
$MINGW_TARGET-usbredir
$MINGW_TARGET-zstd "
- Write-Output "Running build at $(Get-Date -Format u)"
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- mkdir build
- cd build
- ..\msys64\usr\bin\bash -lc "../configure --enable-fdt=system $CONFIGURE_ARGS"
- ..\msys64\usr\bin\bash -lc "make"
- ..\msys64\usr\bin\bash -lc "make check MTESTARGS='$TEST_ARGS' || { cat meson-logs/testlog.txt; exit 1; } ;"
- Write-Output "Finished build at $(Get-Date -Format u)"
msys2-64bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-x86_64
MSYSTEM: MINGW64
# do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices")
# changed to compile QEMU with the --without-default-devices switch
# for the msys2 64-bit job, due to the build could not complete within
# the project timeout.
CONFIGURE_ARGS: --target-list=x86_64-softmmu --without-default-devices -Ddebug=false -Doptimization=0
# qTests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
TEST_ARGS: --no-suite qtest
script:
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-x86_64-capstone
mingw-w64-x86_64-curl
mingw-w64-x86_64-cyrus-sasl
mingw-w64-x86_64-dtc
mingw-w64-x86_64-gcc
mingw-w64-x86_64-glib2
mingw-w64-x86_64-gnutls
mingw-w64-x86_64-gtk3
mingw-w64-x86_64-libgcrypt
mingw-w64-x86_64-libjpeg-turbo
mingw-w64-x86_64-libnfs
mingw-w64-x86_64-libpng
mingw-w64-x86_64-libssh
mingw-w64-x86_64-libtasn1
mingw-w64-x86_64-libusb
mingw-w64-x86_64-lzo2
mingw-w64-x86_64-nettle
mingw-w64-x86_64-ninja
mingw-w64-x86_64-pixman
mingw-w64-x86_64-pkgconf
mingw-w64-x86_64-python
mingw-w64-x86_64-SDL2
mingw-w64-x86_64-SDL2_image
mingw-w64-x86_64-snappy
mingw-w64-x86_64-spice
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYSTEM = 'MINGW64' # Start a 64-bit MinGW environment
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- mkdir build
- cd build
# Note: do not remove "--without-default-devices"!
# commit 9f8e6cad65a6 ("gitlab-ci: Speed up the msys2-64bit job by using --without-default-devices")
# changed to compile QEMU with the --without-default-devices switch
# for the msys2 64-bit job, due to the build could not complete within
# the project timeout.
- ..\msys64\usr\bin\bash -lc '../configure --target-list=x86_64-softmmu
--without-default-devices --enable-fdt=system'
- ..\msys64\usr\bin\bash -lc 'make'
# qTests don't run successfully with "--without-default-devices",
# so let's exclude the qtests from CI for now.
- ..\msys64\usr\bin\bash -lc 'make check MTESTARGS=\"--no-suite qtest\" || { cat meson-logs/testlog.txt; exit 1; } ;'
msys2-32bit:
extends: .shared_msys2_builder
variables:
MINGW_TARGET: mingw-w64-i686
MSYSTEM: MINGW32
CONFIGURE_ARGS: --target-list=ppc64-softmmu -Ddebug=false -Doptimization=0
TEST_ARGS: --no-suite qtest
script:
- .\msys64\usr\bin\bash -lc "pacman -Sy --noconfirm --needed
bison diffutils flex
git grep make sed
mingw-w64-i686-capstone
mingw-w64-i686-curl
mingw-w64-i686-cyrus-sasl
mingw-w64-i686-dtc
mingw-w64-i686-gcc
mingw-w64-i686-glib2
mingw-w64-i686-gnutls
mingw-w64-i686-gtk3
mingw-w64-i686-libgcrypt
mingw-w64-i686-libjpeg-turbo
mingw-w64-i686-libnfs
mingw-w64-i686-libpng
mingw-w64-i686-libssh
mingw-w64-i686-libtasn1
mingw-w64-i686-libusb
mingw-w64-i686-lzo2
mingw-w64-i686-nettle
mingw-w64-i686-ninja
mingw-w64-i686-pixman
mingw-w64-i686-pkgconf
mingw-w64-i686-python
mingw-w64-i686-SDL2
mingw-w64-i686-SDL2_image
mingw-w64-i686-snappy
mingw-w64-i686-spice
mingw-w64-i686-usbredir
mingw-w64-i686-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
- $env:MSYSTEM = 'MINGW32' # Start a 32-bit MinGW environment
- $env:MSYS = 'winsymlinks:native' # Enable native Windows symlink
- mkdir build
- cd build
- ..\msys64\usr\bin\bash -lc '../configure --target-list=ppc64-softmmu
--enable-fdt=system'
- ..\msys64\usr\bin\bash -lc 'make'
- ..\msys64\usr\bin\bash -lc 'make check MTESTARGS=\"--no-suite qtest\" ||
{ cat meson-logs/testlog.txt; exit 1; }'


@@ -76,10 +76,9 @@ Paul Burton <paulburton@kernel.org> <pburton@wavecomp.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <f4bug@amsat.org>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@redhat.com>
Philippe Mathieu-Daudé <philmd@linaro.org> <philmd@fungible.com>
Roman Bolshakov <rbolshakov@ddn.com> <r.bolshakov@yadro.com>
Stefan Brankovic <stefan.brankovic@syrmia.com> <stefan.brankovic@rt-rk.com.com>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
Yongbok Kim <yongbok.kim@mips.com> <yongbok.kim@imgtec.com>
Taylor Simpson <ltaylorsimpson@gmail.com> <tsimpson@quicinc.com>
# Also list preferred name forms where people have changed their
# git author config, or had utf8/latin1 encoding issues.


@@ -452,6 +452,8 @@ S: Supported
F: target/s390x/kvm/
F: target/s390x/machine.c
F: target/s390x/sigp.c
F: hw/s390x/pv.c
F: include/hw/s390x/pv.h
F: gdb-xml/s390*.xml
T: git https://github.com/borntraeger/qemu.git s390-next
L: qemu-s390x@nongnu.org
@@ -496,14 +498,14 @@ F: target/arm/hvf/
X86 HVF CPUs
M: Cameron Esfahani <dirty@apple.com>
M: Roman Bolshakov <rbolshakov@ddn.com>
M: Roman Bolshakov <r.bolshakov@yadro.com>
W: https://wiki.qemu.org/Features/HVF
S: Maintained
F: target/i386/hvf/
HVF
M: Cameron Esfahani <dirty@apple.com>
M: Roman Bolshakov <rbolshakov@ddn.com>
M: Roman Bolshakov <r.bolshakov@yadro.com>
W: https://wiki.qemu.org/Features/HVF
S: Maintained
F: accel/hvf/
@@ -2049,7 +2051,7 @@ F: hw/usb/dev-serial.c
VFIO
M: Alex Williamson <alex.williamson@redhat.com>
M: Cédric Le Goater <clg@redhat.com>
R: Cédric Le Goater <clg@redhat.com>
S: Supported
F: hw/vfio/*
F: include/hw/vfio/
@@ -2118,24 +2120,17 @@ F: include/sysemu/balloon.h
virtio-9p
M: Greg Kurz <groug@kaod.org>
M: Christian Schoenebeck <qemu_oss@crudebyte.com>
S: Maintained
S: Odd Fixes
W: https://wiki.qemu.org/Documentation/9p
F: hw/9pfs/
X: hw/9pfs/xen-9p*
X: hw/9pfs/9p-proxy*
F: fsdev/
X: fsdev/virtfs-proxy-helper.c
F: docs/tools/virtfs-proxy-helper.rst
F: tests/qtest/virtio-9p-test.c
F: tests/qtest/libqos/virtio-9p*
T: git https://gitlab.com/gkurz/qemu.git 9p-next
T: git https://github.com/cschoenebeck/qemu.git 9p.next
virtio-9p-proxy
F: hw/9pfs/9p-proxy*
F: fsdev/virtfs-proxy-helper.c
F: docs/tools/virtfs-proxy-helper.rst
S: Obsolete
virtio-blk
M: Stefan Hajnoczi <stefanha@redhat.com>
L: qemu-block@nongnu.org
@@ -2215,13 +2210,6 @@ F: hw/virtio/vhost-user-gpio*
F: include/hw/virtio/vhost-user-gpio.h
F: tests/qtest/libqos/virtio-gpio.*
vhost-user-scmi
R: mzamazal@redhat.com
S: Supported
F: hw/virtio/vhost-user-scmi*
F: include/hw/virtio/vhost-user-scmi.h
F: tests/qtest/libqos/virtio-scmi.*
virtio-crypto
M: Gonglei <arei.gonglei@huawei.com>
S: Supported
@@ -2229,13 +2217,6 @@ F: hw/virtio/virtio-crypto.c
F: hw/virtio/virtio-crypto-pci.c
F: include/hw/virtio/virtio-crypto.h
virtio based memory device
M: David Hildenbrand <david@redhat.com>
S: Supported
F: hw/virtio/virtio-md-pci.c
F: include/hw/virtio/virtio-md-pci.h
F: stubs/virtio-md-pci.c
virtio-mem
M: David Hildenbrand <david@redhat.com>
S: Supported
@@ -3125,7 +3106,6 @@ R: Qiuhao Li <Qiuhao.Li@outlook.com>
S: Maintained
F: tests/qtest/fuzz/
F: tests/qtest/fuzz-*test.c
F: tests/docker/test-fuzz
F: scripts/oss-fuzz/
F: hw/mem/sparse-mem.c
F: docs/devel/fuzzing.rst
@@ -3209,15 +3189,6 @@ F: qapi/migration.json
F: tests/migration/
F: util/userfaultfd.c
Migration dirty limit and dirty page rate
M: Hyman Huang <yong.huang@smartx.com>
S: Maintained
F: softmmu/dirtylimit.c
F: include/sysemu/dirtylimit.h
F: migration/dirtyrate.c
F: migration/dirtyrate.h
F: include/sysemu/dirtyrate.h
D-Bus
M: Marc-André Lureau <marcandre.lureau@redhat.com>
S: Maintained
@@ -3231,7 +3202,6 @@ F: docs/interop/dbus*
F: docs/sphinx/dbus*
F: docs/sphinx/fakedbusdoc.py
F: tests/qtest/dbus*
F: scripts/xml-preprocess*
Seccomp
M: Daniel P. Berrange <berrange@redhat.com>
@@ -3245,7 +3215,6 @@ M: Daniel P. Berrange <berrange@redhat.com>
S: Maintained
F: crypto/
F: include/crypto/
F: host/include/*/host/crypto/
F: qapi/crypto.json
F: tests/unit/test-crypto-*
F: tests/bench/benchmark-crypto-*


@@ -28,7 +28,7 @@ quiet-command = $(quiet-@)$(call quiet-command-run,$1,$2,$3)
UNCHECKED_GOALS := TAGS gtags cscope ctags dist \
help check-help print-% \
docker docker-% lcitool-refresh vm-help vm-test vm-build-%
docker docker-% vm-help vm-test vm-build-%
all:
.PHONY: all clean distclean recurse-all dist msi FORCE


@@ -1 +1 @@
8.1.2
8.0.50


@@ -304,7 +304,7 @@ static void hvf_region_del(MemoryListener *listener,
static MemoryListener hvf_memory_listener = {
.name = "hvf",
.priority = MEMORY_LISTENER_PRIORITY_ACCEL,
.priority = 10,
.region_add = hvf_region_add,
.region_del = hvf_region_del,
.log_start = hvf_log_start,
@@ -372,19 +372,19 @@ type_init(hvf_type_init);
static void hvf_vcpu_destroy(CPUState *cpu)
{
hv_return_t ret = hv_vcpu_destroy(cpu->accel->fd);
hv_return_t ret = hv_vcpu_destroy(cpu->hvf->fd);
assert_hvf_ok(ret);
hvf_arch_vcpu_destroy(cpu);
g_free(cpu->accel);
cpu->accel = NULL;
g_free(cpu->hvf);
cpu->hvf = NULL;
}
static int hvf_init_vcpu(CPUState *cpu)
{
int r;
cpu->accel = g_new0(AccelCPUState, 1);
cpu->hvf = g_malloc0(sizeof(*cpu->hvf));
/* init cpu signals */
struct sigaction sigact;
@@ -393,19 +393,18 @@ static int hvf_init_vcpu(CPUState *cpu)
sigact.sa_handler = dummy_signal;
sigaction(SIG_IPI, &sigact, NULL);
pthread_sigmask(SIG_BLOCK, NULL, &cpu->accel->unblock_ipi_mask);
sigdelset(&cpu->accel->unblock_ipi_mask, SIG_IPI);
pthread_sigmask(SIG_BLOCK, NULL, &cpu->hvf->unblock_ipi_mask);
sigdelset(&cpu->hvf->unblock_ipi_mask, SIG_IPI);
#ifdef __aarch64__
r = hv_vcpu_create(&cpu->accel->fd,
(hv_vcpu_exit_t **)&cpu->accel->exit, NULL);
r = hv_vcpu_create(&cpu->hvf->fd, (hv_vcpu_exit_t **)&cpu->hvf->exit, NULL);
#else
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->accel->fd, HV_VCPU_DEFAULT);
r = hv_vcpu_create((hv_vcpuid_t *)&cpu->hvf->fd, HV_VCPU_DEFAULT);
#endif
cpu->vcpu_dirty = 1;
assert_hvf_ok(r);
cpu->accel->guest_debug_enabled = false;
cpu->hvf->guest_debug_enabled = false;
return hvf_arch_init_vcpu(cpu);
}


@@ -1105,7 +1105,6 @@ static MemoryListener kvm_coalesced_pio_listener = {
.name = "kvm-coalesced-pio",
.coalesced_io_add = kvm_coalesce_pio_add,
.coalesced_io_del = kvm_coalesce_pio_del,
.priority = MEMORY_LISTENER_PRIORITY_MIN,
};
int kvm_check_extension(KVMState *s, unsigned int extension)
@@ -1778,7 +1777,7 @@ void kvm_memory_listener_register(KVMState *s, KVMMemoryListener *kml,
kml->listener.commit = kvm_region_commit;
kml->listener.log_start = kvm_log_start;
kml->listener.log_stop = kvm_log_stop;
kml->listener.priority = MEMORY_LISTENER_PRIORITY_ACCEL;
kml->listener.priority = 10;
kml->listener.name = name;
if (s->kvm_dirty_ring_size) {
@@ -1803,7 +1802,7 @@ static MemoryListener kvm_io_listener = {
.name = "kvm-io",
.eventfd_add = kvm_io_ioeventfd_add,
.eventfd_del = kvm_io_ioeventfd_del,
.priority = MEMORY_LISTENER_PRIORITY_DEV_BACKEND,
.priority = 10,
};
int kvm_set_irq(KVMState *s, int irq, int level)
@@ -2458,7 +2457,7 @@ static int kvm_init(MachineState *ms)
KVMState *s;
const KVMCapabilityInfo *missing_cap;
int ret;
int type;
int type = 0;
uint64_t dirty_log_manual_caps;
qemu_mutex_init(&kml_slots_lock);
@@ -2523,8 +2522,6 @@ static int kvm_init(MachineState *ms)
type = mc->kvm_type(ms, kvm_type);
} else if (mc->kvm_type) {
type = mc->kvm_type(ms, NULL);
} else {
type = kvm_arch_get_default_type(ms);
}
do {
@@ -2814,7 +2811,7 @@ void kvm_flush_coalesced_mmio_buffer(void)
{
KVMState *s = kvm_state;
if (!s || s->coalesced_flush_in_progress) {
if (s->coalesced_flush_in_progress) {
return;
}


@@ -27,7 +27,6 @@ bool kvm_allowed;
bool kvm_readonly_mem_allowed;
bool kvm_ioeventfd_any_length_allowed;
bool kvm_msi_use_devid;
bool kvm_direct_msi_allowed;
void kvm_flush_coalesced_mmio_buffer(void)
{


@@ -41,7 +41,7 @@ CMPXCHG_HELPER(cmpxchgq_be, uint64_t)
CMPXCHG_HELPER(cmpxchgq_le, uint64_t)
#endif
#if HAVE_CMPXCHG128
#ifdef CONFIG_CMPXCHG128
CMPXCHG_HELPER(cmpxchgo_be, Int128)
CMPXCHG_HELPER(cmpxchgo_le, Int128)
#endif


@@ -33,6 +33,36 @@ void cpu_loop_exit_noexc(CPUState *cpu)
cpu_loop_exit(cpu);
}
#if defined(CONFIG_SOFTMMU)
void cpu_reloading_memory_map(void)
{
if (qemu_in_vcpu_thread() && current_cpu->running) {
/* The guest can in theory prolong the RCU critical section as long
* as it feels like. The major problem with this is that because it
* can do multiple reconfigurations of the memory map within the
* critical section, we could potentially accumulate an unbounded
* collection of memory data structures awaiting reclamation.
*
* Because the only thing we're currently protecting with RCU is the
* memory data structures, it's sufficient to break the critical section
* in this callback, which we know will get called every time the
* memory map is rearranged.
*
* (If we add anything else in the system that uses RCU to protect
* its data structures, we will need to implement some other mechanism
* to force TCG CPUs to exit the critical section, at which point this
* part of this callback might become unnecessary.)
*
* This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
* only protects cpu->as->dispatch. Since we know our caller is about
* to reload it, it's safe to split the critical section.
*/
rcu_read_unlock();
rcu_read_lock();
}
}
#endif
void cpu_loop_exit(CPUState *cpu)
{
/* Undo the setting in cpu_tb_exec. */


@@ -298,7 +298,7 @@ static void log_cpu_exec(vaddr pc, CPUState *cpu,
if (qemu_log_in_addr_range(pc)) {
qemu_log_mask(CPU_LOG_EXEC,
"Trace %d: %p [%08" PRIx64
"/%016" VADDR_PRIx "/%08x/%08x] %s\n",
"/%" VADDR_PRIx "/%08x/%08x] %s\n",
cpu->cpu_index, tb->tc.ptr, tb->cs_base, pc,
tb->flags, tb->cflags, lookup_symbol(pc));
@@ -487,7 +487,7 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = log_pc(cpu, last_tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("Stopped execution of TB chain before %p [%016"
qemu_log("Stopped execution of TB chain before %p [%"
VADDR_PRIx "] %s\n",
last_tb->tc.ptr, pc, lookup_symbol(pc));
}
@@ -526,43 +526,6 @@ static void cpu_exec_exit(CPUState *cpu)
}
}
static void cpu_exec_longjmp_cleanup(CPUState *cpu)
{
/* Non-buggy compilers preserve this; assert the correct value. */
g_assert(cpu == current_cpu);
#ifdef CONFIG_USER_ONLY
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#else
/*
* For softmmu, a tlb_fill fault during translation will land here,
* and we need to release any page locks held. In system mode we
* have one tcg_ctx per thread, so we know it was this cpu doing
* the translation.
*
* Alternative 1: Install a cleanup to be called via an exception
* handling safe longjmp. It seems plausible that all our hosts
* support such a thing. We'd have to properly register unwind info
* for the JIT for EH, rather that just for GDB.
*
* Alternative 2: Set and restore cpu->jmp_env in tb_gen_code to
* capture the cpu_loop_exit longjmp, perform the cleanup, and
* jump again to arrive here.
*/
if (tcg_ctx->gen_tb) {
tb_unlock_pages(tcg_ctx->gen_tb);
tcg_ctx->gen_tb = NULL;
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
void cpu_exec_step_atomic(CPUState *cpu)
{
CPUArchState *env = cpu->env_ptr;
@@ -605,7 +568,16 @@ void cpu_exec_step_atomic(CPUState *cpu)
cpu_tb_exec(cpu, tb, &tb_exit);
cpu_exec_exit(cpu);
} else {
cpu_exec_longjmp_cleanup(cpu);
#ifdef CONFIG_USER_ONLY
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
/*
@@ -720,7 +692,7 @@ static inline bool cpu_handle_exception(CPUState *cpu, int *ret)
&& cpu_neg(cpu)->icount_decr.u16.low + cpu->icount_extra == 0) {
/* Execute just one insn to trigger exception pending in the log */
cpu->cflags_next_tb = (curr_cflags(cpu) & ~CF_USE_ICOUNT)
| CF_LAST_IO | CF_NOIRQ | 1;
| CF_NOIRQ | 1;
}
#endif
return false;
@@ -1051,7 +1023,20 @@ static int cpu_exec_setjmp(CPUState *cpu, SyncClocks *sc)
{
/* Prepare setjmp context for exception handling. */
if (unlikely(sigsetjmp(cpu->jmp_env, 0) != 0)) {
cpu_exec_longjmp_cleanup(cpu);
/* Non-buggy compilers preserve this; assert the correct value. */
g_assert(cpu == current_cpu);
#ifdef CONFIG_USER_ONLY
clear_helper_retaddr();
if (have_mmap_lock()) {
mmap_unlock();
}
#endif
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
}
return cpu_exec_loop(cpu, sc);


@@ -497,8 +497,8 @@ static void tlb_flush_page_locked(CPUArchState *env, int midx, vaddr page)
/* Check if we need to flush due to large pages. */
if ((page & lp_mask) == lp_addr) {
tlb_debug("forcing full flush midx %d (%016"
VADDR_PRIx "/%016" VADDR_PRIx ")\n",
tlb_debug("forcing full flush midx %d (%"
VADDR_PRIx "/%" VADDR_PRIx ")\n",
midx, lp_addr, lp_mask);
tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
} else {
@@ -527,7 +527,7 @@ static void tlb_flush_page_by_mmuidx_async_0(CPUState *cpu,
assert_cpu_is_self(cpu);
tlb_debug("page addr: %016" VADDR_PRIx " mmu_map:0x%x\n", addr, idxmap);
tlb_debug("page addr: %" VADDR_PRIx " mmu_map:0x%x\n", addr, idxmap);
qemu_spin_lock(&env_tlb(env)->c.lock);
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
@@ -591,7 +591,7 @@ static void tlb_flush_page_by_mmuidx_async_2(CPUState *cpu,
void tlb_flush_page_by_mmuidx(CPUState *cpu, vaddr addr, uint16_t idxmap)
{
tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%" PRIx16 "\n", addr, idxmap);
tlb_debug("addr: %" VADDR_PRIx " mmu_idx:%" PRIx16 "\n", addr, idxmap);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
@@ -625,7 +625,7 @@ void tlb_flush_page(CPUState *cpu, vaddr addr)
void tlb_flush_page_by_mmuidx_all_cpus(CPUState *src_cpu, vaddr addr,
uint16_t idxmap)
{
tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
tlb_debug("addr: %" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
@@ -666,7 +666,7 @@ void tlb_flush_page_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
vaddr addr,
uint16_t idxmap)
{
tlb_debug("addr: %016" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
tlb_debug("addr: %" VADDR_PRIx " mmu_idx:%"PRIx16"\n", addr, idxmap);
/* This should already be page aligned */
addr &= TARGET_PAGE_MASK;
@@ -728,7 +728,7 @@ static void tlb_flush_range_locked(CPUArchState *env, int midx,
*/
if (mask < f->mask || len > f->mask) {
tlb_debug("forcing full flush midx %d ("
"%016" VADDR_PRIx "/%016" VADDR_PRIx "+%016" VADDR_PRIx ")\n",
"%" VADDR_PRIx "/%" VADDR_PRIx "+%" VADDR_PRIx ")\n",
midx, addr, mask, len);
tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
return;
@@ -741,7 +741,7 @@ static void tlb_flush_range_locked(CPUArchState *env, int midx,
*/
if (((addr + len - 1) & d->large_page_mask) == d->large_page_addr) {
tlb_debug("forcing full flush midx %d ("
"%016" VADDR_PRIx "/%016" VADDR_PRIx ")\n",
"%" VADDR_PRIx "/%" VADDR_PRIx ")\n",
midx, d->large_page_addr, d->large_page_mask);
tlb_flush_one_mmuidx_locked(env, midx, get_clock_realtime());
return;
@@ -773,7 +773,7 @@ static void tlb_flush_range_by_mmuidx_async_0(CPUState *cpu,
assert_cpu_is_self(cpu);
tlb_debug("range: %016" VADDR_PRIx "/%u+%016" VADDR_PRIx " mmu_map:0x%x\n",
tlb_debug("range: %" VADDR_PRIx "/%u+%" VADDR_PRIx " mmu_map:0x%x\n",
d.addr, d.bits, d.len, d.idxmap);
qemu_spin_lock(&env_tlb(env)->c.lock);
@@ -1165,7 +1165,7 @@ void tlb_set_page_full(CPUState *cpu, int mmu_idx,
&xlat, &sz, full->attrs, &prot);
assert(sz >= TARGET_PAGE_SIZE);
tlb_debug("vaddr=%016" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx
tlb_debug("vaddr=%" VADDR_PRIx " paddr=0x" HWADDR_FMT_plx
" prot=%x idx=%d\n",
addr, full->phys_addr, prot, mmu_idx);
@@ -1363,21 +1363,6 @@ static inline void cpu_transaction_failed(CPUState *cpu, hwaddr physaddr,
}
}
/*
* Save a potentially trashed CPUTLBEntryFull for later lookup by plugin.
* This is read by tlb_plugin_lookup if the fulltlb entry doesn't match
* because of the side effect of io_writex changing memory layout.
*/
static void save_iotlb_data(CPUState *cs, MemoryRegionSection *section,
hwaddr mr_offset)
{
#ifdef CONFIG_PLUGIN
SavedIOTLB *saved = &cs->saved_iotlb;
saved->section = section;
saved->mr_offset = mr_offset;
#endif
}
static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
int mmu_idx, vaddr addr, uintptr_t retaddr,
MMUAccessType access_type, MemOp op)
@@ -1397,12 +1382,6 @@ static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
cpu_io_recompile(cpu, retaddr);
}
/*
* The memory_region_dispatch may trigger a flush/resize
* so for plugins we save the iotlb_data just in case.
*/
save_iotlb_data(cpu, section, mr_offset);
{
QEMU_IOTHREAD_LOCK_GUARD();
r = memory_region_dispatch_read(mr, mr_offset, &val, op, full->attrs);
@@ -1419,6 +1398,21 @@ static uint64_t io_readx(CPUArchState *env, CPUTLBEntryFull *full,
return val;
}
/*
* Save a potentially trashed CPUTLBEntryFull for later lookup by plugin.
* This is read by tlb_plugin_lookup if the fulltlb entry doesn't match
* because of the side effect of io_writex changing memory layout.
*/
static void save_iotlb_data(CPUState *cs, MemoryRegionSection *section,
hwaddr mr_offset)
{
#ifdef CONFIG_PLUGIN
SavedIOTLB *saved = &cs->saved_iotlb;
saved->section = section;
saved->mr_offset = mr_offset;
#endif
}
static void io_writex(CPUArchState *env, CPUTLBEntryFull *full,
int mmu_idx, uint64_t val, vaddr addr,
uintptr_t retaddr, MemOp op)
@@ -1519,14 +1513,13 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
int fault_size, MMUAccessType access_type,
int mmu_idx, bool nonfault,
void **phost, CPUTLBEntryFull **pfull,
uintptr_t retaddr, bool check_mem_cbs)
uintptr_t retaddr)
{
uintptr_t index = tlb_index(env, mmu_idx, addr);
CPUTLBEntry *entry = tlb_entry(env, mmu_idx, addr);
uint64_t tlb_addr = tlb_read_idx(entry, access_type);
vaddr page_addr = addr & TARGET_PAGE_MASK;
int flags = TLB_FLAGS_MASK & ~TLB_FORCE_SLOW;
bool force_mmio = check_mem_cbs && cpu_plugin_mem_cbs_enabled(env_cpu(env));
CPUTLBEntryFull *full;
if (!tlb_hit_page(tlb_addr, page_addr)) {
@@ -1560,9 +1553,7 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
flags |= full->slow_flags[access_type];
/* Fold all "mmio-like" bits into TLB_MMIO. This is not RAM. */
if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))
||
(access_type != MMU_INST_FETCH && force_mmio)) {
if (unlikely(flags & ~(TLB_WATCHPOINT | TLB_NOTDIRTY))) {
*phost = NULL;
return TLB_MMIO;
}
@@ -1578,7 +1569,7 @@ int probe_access_full(CPUArchState *env, vaddr addr, int size,
uintptr_t retaddr)
{
int flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
nonfault, phost, pfull, retaddr, true);
nonfault, phost, pfull, retaddr);
/* Handle clean RAM pages. */
if (unlikely(flags & TLB_NOTDIRTY)) {
@@ -1589,29 +1580,6 @@ int probe_access_full(CPUArchState *env, vaddr addr, int size,
return flags;
}
int probe_access_full_mmu(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
void **phost, CPUTLBEntryFull **pfull)
{
void *discard_phost;
CPUTLBEntryFull *discard_tlb;
/* privately handle users that don't need full results */
phost = phost ? phost : &discard_phost;
pfull = pfull ? pfull : &discard_tlb;
int flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
true, phost, pfull, 0, false);
/* Handle clean RAM pages. */
if (unlikely(flags & TLB_NOTDIRTY)) {
notdirty_write(env_cpu(env), addr, 1, *pfull, 0);
flags &= ~TLB_NOTDIRTY;
}
return flags;
}
int probe_access_flags(CPUArchState *env, vaddr addr, int size,
MMUAccessType access_type, int mmu_idx,
bool nonfault, void **phost, uintptr_t retaddr)
@@ -1622,7 +1590,7 @@ int probe_access_flags(CPUArchState *env, vaddr addr, int size,
g_assert(-(addr | TARGET_PAGE_MASK) >= size);
flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
nonfault, phost, &full, retaddr, true);
nonfault, phost, &full, retaddr);
/* Handle clean RAM pages. */
if (unlikely(flags & TLB_NOTDIRTY)) {
@@ -1643,7 +1611,7 @@ void *probe_access(CPUArchState *env, vaddr addr, int size,
g_assert(-(addr | TARGET_PAGE_MASK) >= size);
flags = probe_access_internal(env, addr, size, access_type, mmu_idx,
false, &host, &full, retaddr, true);
false, &host, &full, retaddr);
/* Per the interface, size == 0 merely faults the access. */
if (size == 0) {
@@ -1676,7 +1644,7 @@ void *tlb_vaddr_to_host(CPUArchState *env, abi_ptr addr,
int flags;
flags = probe_access_internal(env, addr, 0, access_type,
mmu_idx, true, &host, &full, 0, false);
mmu_idx, true, &host, &full, 0);
/* No combination of flags are expected by the caller. */
return flags ? NULL : host;
@@ -1699,8 +1667,7 @@ tb_page_addr_t get_page_addr_code_hostp(CPUArchState *env, vaddr addr,
void *p;
(void)probe_access_internal(env, addr, 1, MMU_INST_FETCH,
cpu_mmu_index(env, true), false,
&p, &full, 0, false);
cpu_mmu_index(env, true), false, &p, &full, 0);
if (p == NULL) {
return -1;
}
@@ -2072,55 +2039,27 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
/**
* do_ld_mmio_beN:
* @env: cpu context
* @full: page parameters
* @p: translation parameters
* @ret_be: accumulated data
* @addr: virtual address
* @size: number of bytes
* @mmu_idx: virtual address context
* @ra: return address into tcg generated code, or 0
* Context: iothread lock held
*
* Load @size bytes from @addr, which is memory-mapped i/o.
* Load @p->size bytes from @p->addr, which is memory-mapped i/o.
* The bytes are concatenated in big-endian order with @ret_be.
*/
static uint64_t do_ld_mmio_beN(CPUArchState *env, CPUTLBEntryFull *full,
uint64_t ret_be, vaddr addr, int size,
int mmu_idx, MMUAccessType type, uintptr_t ra)
static uint64_t do_ld_mmio_beN(CPUArchState *env, MMULookupPageData *p,
uint64_t ret_be, int mmu_idx,
MMUAccessType type, uintptr_t ra)
{
uint64_t t;
CPUTLBEntryFull *full = p->full;
vaddr addr = p->addr;
int i, size = p->size;
tcg_debug_assert(size > 0 && size <= 8);
do {
/* Read aligned pieces up to 8 bytes. */
switch ((size | (int)addr) & 7) {
case 1:
case 3:
case 5:
case 7:
t = io_readx(env, full, mmu_idx, addr, ra, type, MO_UB);
ret_be = (ret_be << 8) | t;
size -= 1;
addr += 1;
break;
case 2:
case 6:
t = io_readx(env, full, mmu_idx, addr, ra, type, MO_BEUW);
ret_be = (ret_be << 16) | t;
size -= 2;
addr += 2;
break;
case 4:
t = io_readx(env, full, mmu_idx, addr, ra, type, MO_BEUL);
ret_be = (ret_be << 32) | t;
size -= 4;
addr += 4;
break;
case 0:
return io_readx(env, full, mmu_idx, addr, ra, type, MO_BEUQ);
default:
qemu_build_not_reached();
}
} while (size);
QEMU_IOTHREAD_LOCK_GUARD();
for (i = 0; i < size; i++) {
uint8_t x = io_readx(env, full, mmu_idx, addr + i, ra, type, MO_UB);
ret_be = (ret_be << 8) | x;
}
return ret_be;
}
@@ -2266,9 +2205,7 @@ static uint64_t do_ld_beN(CPUArchState *env, MMULookupPageData *p,
unsigned tmp, half_size;
if (unlikely(p->flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
return do_ld_mmio_beN(env, p->full, ret_be, p->addr, p->size,
mmu_idx, type, ra);
return do_ld_mmio_beN(env, p, ret_be, mmu_idx, type, ra);
}
/*
@@ -2317,11 +2254,11 @@ static Int128 do_ld16_beN(CPUArchState *env, MMULookupPageData *p,
MemOp atom;
if (unlikely(p->flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
a = do_ld_mmio_beN(env, p->full, a, p->addr, size - 8,
mmu_idx, MMU_DATA_LOAD, ra);
b = do_ld_mmio_beN(env, p->full, 0, p->addr + 8, 8,
mmu_idx, MMU_DATA_LOAD, ra);
p->size = size - 8;
a = do_ld_mmio_beN(env, p, a, mmu_idx, MMU_DATA_LOAD, ra);
p->addr += p->size;
p->size = 8;
b = do_ld_mmio_beN(env, p, 0, mmu_idx, MMU_DATA_LOAD, ra);
return int128_make128(b, a);
}
@@ -2376,20 +2313,16 @@ static uint8_t do_ld_1(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
static uint16_t do_ld_2(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
MMUAccessType type, MemOp memop, uintptr_t ra)
{
uint16_t ret;
uint64_t ret;
if (unlikely(p->flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 2, mmu_idx, type, ra);
if ((memop & MO_BSWAP) == MO_LE) {
ret = bswap16(ret);
}
} else {
/* Perform the load host endian, then swap if necessary. */
ret = load_atom_2(env, ra, p->haddr, memop);
if (memop & MO_BSWAP) {
ret = bswap16(ret);
}
return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop);
}
/* Perform the load host endian, then swap if necessary. */
ret = load_atom_2(env, ra, p->haddr, memop);
if (memop & MO_BSWAP) {
ret = bswap16(ret);
}
return ret;
}
@@ -2400,17 +2333,13 @@ static uint32_t do_ld_4(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
uint32_t ret;
if (unlikely(p->flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 4, mmu_idx, type, ra);
if ((memop & MO_BSWAP) == MO_LE) {
ret = bswap32(ret);
}
} else {
/* Perform the load host endian. */
ret = load_atom_4(env, ra, p->haddr, memop);
if (memop & MO_BSWAP) {
ret = bswap32(ret);
}
return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop);
}
/* Perform the load host endian. */
ret = load_atom_4(env, ra, p->haddr, memop);
if (memop & MO_BSWAP) {
ret = bswap32(ret);
}
return ret;
}
@@ -2421,17 +2350,13 @@ static uint64_t do_ld_8(CPUArchState *env, MMULookupPageData *p, int mmu_idx,
uint64_t ret;
if (unlikely(p->flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
ret = do_ld_mmio_beN(env, p->full, 0, p->addr, 8, mmu_idx, type, ra);
if ((memop & MO_BSWAP) == MO_LE) {
ret = bswap64(ret);
}
} else {
/* Perform the load host endian. */
ret = load_atom_8(env, ra, p->haddr, memop);
if (memop & MO_BSWAP) {
ret = bswap64(ret);
}
return io_readx(env, p->full, mmu_idx, p->addr, ra, type, memop);
}
/* Perform the load host endian. */
ret = load_atom_8(env, ra, p->haddr, memop);
if (memop & MO_BSWAP) {
ret = bswap64(ret);
}
return ret;
}
@@ -2579,22 +2504,20 @@ static Int128 do_ld16_mmu(CPUArchState *env, vaddr addr,
cpu_req_mo(TCG_MO_LD_LD | TCG_MO_ST_LD);
crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_LOAD, &l);
if (likely(!crosspage)) {
/* Perform the load host endian. */
if (unlikely(l.page[0].flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
a = do_ld_mmio_beN(env, l.page[0].full, 0, addr, 8,
l.mmu_idx, MMU_DATA_LOAD, ra);
b = do_ld_mmio_beN(env, l.page[0].full, 0, addr + 8, 8,
l.mmu_idx, MMU_DATA_LOAD, ra);
ret = int128_make128(b, a);
if ((l.memop & MO_BSWAP) == MO_LE) {
ret = bswap128(ret);
}
a = io_readx(env, l.page[0].full, l.mmu_idx, addr,
ra, MMU_DATA_LOAD, MO_64);
b = io_readx(env, l.page[0].full, l.mmu_idx, addr + 8,
ra, MMU_DATA_LOAD, MO_64);
ret = int128_make128(HOST_BIG_ENDIAN ? b : a,
HOST_BIG_ENDIAN ? a : b);
} else {
/* Perform the load host endian. */
ret = load_atom_16(env, ra, l.page[0].haddr, l.memop);
if (l.memop & MO_BSWAP) {
ret = bswap128(ret);
}
}
if (l.memop & MO_BSWAP) {
ret = bswap128(ret);
}
return ret;
}
@@ -2714,57 +2637,26 @@ Int128 cpu_ld16_mmu(CPUArchState *env, abi_ptr addr,
/**
* do_st_mmio_leN:
* @env: cpu context
* @full: page parameters
* @p: translation parameters
* @val_le: data to store
* @addr: virtual address
* @size: number of bytes
* @mmu_idx: virtual address context
* @ra: return address into tcg generated code, or 0
* Context: iothread lock held
*
* Store @size bytes at @addr, which is memory-mapped i/o.
* Store @p->size bytes at @p->addr, which is memory-mapped i/o.
* The bytes to store are extracted in little-endian order from @val_le;
* return the bytes of @val_le beyond @p->size that have not been stored.
*/
static uint64_t do_st_mmio_leN(CPUArchState *env, CPUTLBEntryFull *full,
uint64_t val_le, vaddr addr, int size,
int mmu_idx, uintptr_t ra)
static uint64_t do_st_mmio_leN(CPUArchState *env, MMULookupPageData *p,
uint64_t val_le, int mmu_idx, uintptr_t ra)
{
tcg_debug_assert(size > 0 && size <= 8);
do {
/* Store aligned pieces up to 8 bytes. */
switch ((size | (int)addr) & 7) {
case 1:
case 3:
case 5:
case 7:
io_writex(env, full, mmu_idx, val_le, addr, ra, MO_UB);
val_le >>= 8;
size -= 1;
addr += 1;
break;
case 2:
case 6:
io_writex(env, full, mmu_idx, val_le, addr, ra, MO_LEUW);
val_le >>= 16;
size -= 2;
addr += 2;
break;
case 4:
io_writex(env, full, mmu_idx, val_le, addr, ra, MO_LEUL);
val_le >>= 32;
size -= 4;
addr += 4;
break;
case 0:
io_writex(env, full, mmu_idx, val_le, addr, ra, MO_LEUQ);
return 0;
default:
qemu_build_not_reached();
}
} while (size);
CPUTLBEntryFull *full = p->full;
vaddr addr = p->addr;
int i, size = p->size;
QEMU_IOTHREAD_LOCK_GUARD();
for (i = 0; i < size; i++, val_le >>= 8) {
io_writex(env, full, mmu_idx, val_le, addr + i, ra, MO_UB);
}
return val_le;
}
@@ -2779,9 +2671,7 @@ static uint64_t do_st_leN(CPUArchState *env, MMULookupPageData *p,
unsigned tmp, half_size;
if (unlikely(p->flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
return do_st_mmio_leN(env, p->full, val_le, p->addr,
p->size, mmu_idx, ra);
return do_st_mmio_leN(env, p, val_le, mmu_idx, ra);
} else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
return val_le >> (p->size * 8);
}
@@ -2834,11 +2724,11 @@ static uint64_t do_st16_leN(CPUArchState *env, MMULookupPageData *p,
MemOp atom;
if (unlikely(p->flags & TLB_MMIO)) {
QEMU_IOTHREAD_LOCK_GUARD();
do_st_mmio_leN(env, p->full, int128_getlo(val_le),
p->addr, 8, mmu_idx, ra);
return do_st_mmio_leN(env, p->full, int128_gethi(val_le),
p->addr + 8, size - 8, mmu_idx, ra);
p->size = 8;
do_st_mmio_leN(env, p, int128_getlo(val_le), mmu_idx, ra);
p->size = size - 8;
p->addr += 8;
return do_st_mmio_leN(env, p, int128_gethi(val_le), mmu_idx, ra);
} else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
return int128_gethi(val_le) >> ((size - 8) * 8);
}
@@ -2894,11 +2784,7 @@ static void do_st_2(CPUArchState *env, MMULookupPageData *p, uint16_t val,
int mmu_idx, MemOp memop, uintptr_t ra)
{
if (unlikely(p->flags & TLB_MMIO)) {
if ((memop & MO_BSWAP) != MO_LE) {
val = bswap16(val);
}
QEMU_IOTHREAD_LOCK_GUARD();
do_st_mmio_leN(env, p->full, val, p->addr, 2, mmu_idx, ra);
io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop);
} else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
/* nothing */
} else {
@@ -2914,11 +2800,7 @@ static void do_st_4(CPUArchState *env, MMULookupPageData *p, uint32_t val,
int mmu_idx, MemOp memop, uintptr_t ra)
{
if (unlikely(p->flags & TLB_MMIO)) {
if ((memop & MO_BSWAP) != MO_LE) {
val = bswap32(val);
}
QEMU_IOTHREAD_LOCK_GUARD();
do_st_mmio_leN(env, p->full, val, p->addr, 4, mmu_idx, ra);
io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop);
} else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
/* nothing */
} else {
@@ -2934,11 +2816,7 @@ static void do_st_8(CPUArchState *env, MMULookupPageData *p, uint64_t val,
int mmu_idx, MemOp memop, uintptr_t ra)
{
if (unlikely(p->flags & TLB_MMIO)) {
if ((memop & MO_BSWAP) != MO_LE) {
val = bswap64(val);
}
QEMU_IOTHREAD_LOCK_GUARD();
do_st_mmio_leN(env, p->full, val, p->addr, 8, mmu_idx, ra);
io_writex(env, p->full, mmu_idx, val, p->addr, ra, memop);
} else if (unlikely(p->flags & TLB_DISCARD_WRITE)) {
/* nothing */
} else {
@@ -3061,22 +2939,22 @@ static void do_st16_mmu(CPUArchState *env, vaddr addr, Int128 val,
cpu_req_mo(TCG_MO_LD_ST | TCG_MO_ST_ST);
crosspage = mmu_lookup(env, addr, oi, ra, MMU_DATA_STORE, &l);
if (likely(!crosspage)) {
/* Swap to host endian if necessary, then store. */
if (l.memop & MO_BSWAP) {
val = bswap128(val);
}
if (unlikely(l.page[0].flags & TLB_MMIO)) {
if ((l.memop & MO_BSWAP) != MO_LE) {
val = bswap128(val);
}
a = int128_getlo(val);
b = int128_gethi(val);
QEMU_IOTHREAD_LOCK_GUARD();
do_st_mmio_leN(env, l.page[0].full, a, addr, 8, l.mmu_idx, ra);
do_st_mmio_leN(env, l.page[0].full, b, addr + 8, 8, l.mmu_idx, ra);
if (HOST_BIG_ENDIAN) {
b = int128_getlo(val), a = int128_gethi(val);
} else {
a = int128_getlo(val), b = int128_gethi(val);
}
io_writex(env, l.page[0].full, l.mmu_idx, a, addr, ra, MO_64);
io_writex(env, l.page[0].full, l.mmu_idx, b, addr + 8, ra, MO_64);
} else if (unlikely(l.page[0].flags & TLB_DISCARD_WRITE)) {
/* nothing */
} else {
/* Swap to host endian if necessary, then store. */
if (l.memop & MO_BSWAP) {
val = bswap128(val);
}
store_atom_16(env, ra, l.page[0].haddr, l.memop, val);
}
return;
@@ -3200,7 +3078,7 @@ void cpu_st16_mmu(CPUArchState *env, target_ulong addr, Int128 val,
#include "atomic_template.h"
#endif
#if defined(CONFIG_ATOMIC128) || HAVE_CMPXCHG128
#if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128)
#define DATA_SIZE 16
#include "atomic_template.h"
#endif


@@ -10,7 +10,6 @@
#define ACCEL_TCG_INTERNAL_H
#include "exec/exec-all.h"
#include "exec/translate-all.h"
/*
* Access to the various translations structures need to be serialised
@@ -36,32 +35,6 @@ static inline void page_table_config_init(void) { }
void page_table_config_init(void);
#endif
#ifdef CONFIG_USER_ONLY
/*
* For user-only, page_protect sets the page read-only.
* Since most execution is already on read-only pages, and we'd need to
* account for other TBs on the same page, defer undoing any page protection
* until we receive the write fault.
*/
static inline void tb_lock_page0(tb_page_addr_t p0)
{
page_protect(p0);
}
static inline void tb_lock_page1(tb_page_addr_t p0, tb_page_addr_t p1)
{
page_protect(p1);
}
static inline void tb_unlock_page1(tb_page_addr_t p0, tb_page_addr_t p1) { }
static inline void tb_unlock_pages(TranslationBlock *tb) { }
#else
void tb_lock_page0(tb_page_addr_t);
void tb_lock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_page1(tb_page_addr_t, tb_page_addr_t);
void tb_unlock_pages(TranslationBlock *);
#endif
#ifdef CONFIG_SOFTMMU
void tb_invalidate_phys_range_fast(ram_addr_t ram_addr,
unsigned size,
@@ -75,7 +48,8 @@ TranslationBlock *tb_gen_code(CPUState *cpu, vaddr pc,
void page_init(void);
void tb_htable_init(void);
void tb_reset_jump(TranslationBlock *tb, int n);
TranslationBlock *tb_link_page(TranslationBlock *tb);
TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_page_addr_t phys_page2);
bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc);
void cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t host_pc);


@@ -159,11 +159,9 @@ static uint64_t load_atomic8_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*/
WITH_MMAP_LOCK_GUARD() {
if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
uint64_t *p = __builtin_assume_aligned(pv, 8);
return *p;
}
if (!page_check_range(h2g(pv), 8, PAGE_WRITE_ORG)) {
uint64_t *p = __builtin_assume_aligned(pv, 8);
return *p;
}
#endif
@@ -188,27 +186,25 @@ static Int128 load_atomic16_or_exit(CPUArchState *env, uintptr_t ra, void *pv)
return atomic16_read_ro(p);
}
#ifdef CONFIG_USER_ONLY
/*
* We can only use cmpxchg to emulate a load if the page is writable.
* If the page is not writable, then assume the value is immutable
* and requires no locking. This ignores the case of MAP_SHARED with
* another process, because the fallback start_exclusive solution
* provides no protection across processes.
*
* In system mode all guest pages are writable. For user mode,
* we must take mmap_lock so that the query remains valid until
* the write is complete -- tests/tcg/multiarch/munmap-pthread.c
* is an example that can race.
*/
WITH_MMAP_LOCK_GUARD() {
#ifdef CONFIG_USER_ONLY
if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
return *p;
}
if (!page_check_range(h2g(p), 16, PAGE_WRITE_ORG)) {
return *p;
}
#endif
if (HAVE_ATOMIC128_RW) {
return atomic16_read_rw(p);
}
/*
* In system mode all guest pages are writable, and for user-only
* we have just checked writability. Try cmpxchg.
*/
if (HAVE_ATOMIC128_RW) {
return atomic16_read_rw(p);
}
/* Ultimate fallback: re-execute in serial context. */
@@ -404,10 +400,7 @@ static uint16_t load_atom_2(CPUArchState *env, uintptr_t ra,
return load_atomic2(pv);
}
if (HAVE_ATOMIC128_RO) {
intptr_t left_in_page = -(pi | TARGET_PAGE_MASK);
if (likely(left_in_page > 8)) {
return load_atom_extract_al16_or_al8(pv, 2);
}
return load_atom_extract_al16_or_al8(pv, 2);
}
atmax = required_atomicity(env, pi, memop);
@@ -446,10 +439,7 @@ static uint32_t load_atom_4(CPUArchState *env, uintptr_t ra,
return load_atomic4(pv);
}
if (HAVE_ATOMIC128_RO) {
intptr_t left_in_page = -(pi | TARGET_PAGE_MASK);
if (likely(left_in_page > 8)) {
return load_atom_extract_al16_or_al8(pv, 4);
}
return load_atom_extract_al16_or_al8(pv, 4);
}
atmax = required_atomicity(env, pi, memop);


@@ -70,7 +70,17 @@ typedef struct PageDesc PageDesc;
*/
#define assert_page_locked(pd) tcg_debug_assert(have_mmap_lock())
static inline void tb_lock_pages(const TranslationBlock *tb) { }
static inline void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2,
bool alloc)
{
*ret_p1 = NULL;
*ret_p2 = NULL;
}
static inline void page_unlock(PageDesc *pd) { }
static inline void page_lock_tb(const TranslationBlock *tb) { }
static inline void page_unlock_tb(const TranslationBlock *tb) { }
/*
* For user-only, since we are protecting all of memory with a single lock,
@@ -86,7 +96,7 @@ static void tb_remove_all(void)
}
/* Call with mmap_lock held. */
static void tb_record(TranslationBlock *tb)
static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
{
vaddr addr;
int flags;
@@ -381,108 +391,12 @@ static void page_lock(PageDesc *pd)
qemu_spin_lock(&pd->lock);
}
/* Like qemu_spin_trylock, returns false on success */
static bool page_trylock(PageDesc *pd)
{
bool busy = qemu_spin_trylock(&pd->lock);
if (!busy) {
page_lock__debug(pd);
}
return busy;
}
static void page_unlock(PageDesc *pd)
{
qemu_spin_unlock(&pd->lock);
page_unlock__debug(pd);
}
void tb_lock_page0(tb_page_addr_t paddr)
{
page_lock(page_find_alloc(paddr >> TARGET_PAGE_BITS, true));
}
void tb_lock_page1(tb_page_addr_t paddr0, tb_page_addr_t paddr1)
{
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
PageDesc *pd0, *pd1;
if (pindex0 == pindex1) {
/* Identical pages, and the first page is already locked. */
return;
}
pd1 = page_find_alloc(pindex1, true);
if (pindex0 < pindex1) {
/* Correct locking order, we may block. */
page_lock(pd1);
return;
}
/* Incorrect locking order, we cannot block lest we deadlock. */
if (!page_trylock(pd1)) {
return;
}
/*
* Drop the lock on page0 and get both page locks in the right order.
* Restart translation via longjmp.
*/
pd0 = page_find_alloc(pindex0, false);
page_unlock(pd0);
page_lock(pd1);
page_lock(pd0);
siglongjmp(tcg_ctx->jmp_trans, -3);
}
void tb_unlock_page1(tb_page_addr_t paddr0, tb_page_addr_t paddr1)
{
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (pindex0 != pindex1) {
page_unlock(page_find_alloc(pindex1, false));
}
}
static void tb_lock_pages(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (unlikely(paddr0 == -1)) {
return;
}
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
if (pindex0 < pindex1) {
page_lock(page_find_alloc(pindex0, true));
page_lock(page_find_alloc(pindex1, true));
return;
}
page_lock(page_find_alloc(pindex1, true));
}
page_lock(page_find_alloc(pindex0, true));
}
void tb_unlock_pages(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr1 >> TARGET_PAGE_BITS;
if (unlikely(paddr0 == -1)) {
return;
}
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
page_unlock(page_find_alloc(pindex1, false));
}
page_unlock(page_find_alloc(pindex0, false));
}
static inline struct page_entry *
page_entry_new(PageDesc *pd, tb_page_addr_t index)
{
@@ -506,10 +420,13 @@ static void page_entry_destroy(gpointer p)
/* returns false on success */
static bool page_entry_trylock(struct page_entry *pe)
{
bool busy = page_trylock(pe->pd);
bool busy;
busy = qemu_spin_trylock(&pe->pd->lock);
if (!busy) {
g_assert(!pe->locked);
pe->locked = true;
page_lock__debug(pe->pd);
}
return busy;
}
@@ -687,7 +604,8 @@ static void tb_remove_all(void)
* Add the tb in the target page and protect it if necessary.
* Called with @p->lock held.
*/
static void tb_page_add(PageDesc *p, TranslationBlock *tb, unsigned int n)
static inline void tb_page_add(PageDesc *p, TranslationBlock *tb,
unsigned int n)
{
bool page_already_protected;
@@ -707,21 +625,15 @@ static void tb_page_add(PageDesc *p, TranslationBlock *tb, unsigned int n)
}
}
static void tb_record(TranslationBlock *tb)
static void tb_record(TranslationBlock *tb, PageDesc *p1, PageDesc *p2)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
tb_page_add(page_find_alloc(pindex1, false), tb, 1);
tb_page_add(p1, tb, 0);
if (unlikely(p2)) {
tb_page_add(p2, tb, 1);
}
tb_page_add(page_find_alloc(pindex0, false), tb, 0);
}
static void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
static inline void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
{
TranslationBlock *tb1;
uintptr_t *pprev;
@@ -741,16 +653,74 @@ static void tb_page_remove(PageDesc *pd, TranslationBlock *tb)
static void tb_remove(TranslationBlock *tb)
{
tb_page_addr_t paddr0 = tb_page_addr0(tb);
tb_page_addr_t paddr1 = tb_page_addr1(tb);
tb_page_addr_t pindex0 = paddr0 >> TARGET_PAGE_BITS;
tb_page_addr_t pindex1 = paddr0 >> TARGET_PAGE_BITS;
PageDesc *pd;
assert(paddr0 != -1);
if (unlikely(paddr1 != -1) && pindex0 != pindex1) {
tb_page_remove(page_find_alloc(pindex1, false), tb);
pd = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
tb_page_remove(pd, tb);
if (unlikely(tb->page_addr[1] != -1)) {
pd = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
tb_page_remove(pd, tb);
}
}
static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2, bool alloc)
{
PageDesc *p1, *p2;
tb_page_addr_t page1;
tb_page_addr_t page2;
assert_memory_lock();
g_assert(phys1 != -1);
page1 = phys1 >> TARGET_PAGE_BITS;
page2 = phys2 >> TARGET_PAGE_BITS;
p1 = page_find_alloc(page1, alloc);
if (ret_p1) {
*ret_p1 = p1;
}
if (likely(phys2 == -1)) {
page_lock(p1);
return;
} else if (page1 == page2) {
page_lock(p1);
if (ret_p2) {
*ret_p2 = p1;
}
return;
}
p2 = page_find_alloc(page2, alloc);
if (ret_p2) {
*ret_p2 = p2;
}
if (page1 < page2) {
page_lock(p1);
page_lock(p2);
} else {
page_lock(p2);
page_lock(p1);
}
}
/* lock the page(s) of a TB in the correct acquisition order */
static void page_lock_tb(const TranslationBlock *tb)
{
page_lock_pair(NULL, tb_page_addr0(tb), NULL, tb_page_addr1(tb), false);
}
static void page_unlock_tb(const TranslationBlock *tb)
{
PageDesc *p1 = page_find(tb_page_addr0(tb) >> TARGET_PAGE_BITS);
page_unlock(p1);
if (unlikely(tb_page_addr1(tb) != -1)) {
PageDesc *p2 = page_find(tb_page_addr1(tb) >> TARGET_PAGE_BITS);
if (p2 != p1) {
page_unlock(p2);
}
}
tb_page_remove(page_find_alloc(pindex0, false), tb);
}
#endif /* CONFIG_USER_ONLY */
@@ -955,16 +925,18 @@ static void tb_phys_invalidate__locked(TranslationBlock *tb)
void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
{
if (page_addr == -1 && tb_page_addr0(tb) != -1) {
tb_lock_pages(tb);
page_lock_tb(tb);
do_tb_phys_invalidate(tb, true);
tb_unlock_pages(tb);
page_unlock_tb(tb);
} else {
do_tb_phys_invalidate(tb, false);
}
}
/*
* Add a new TB and link it to the physical page tables.
* Add a new TB and link it to the physical page tables. phys_page2 is
* (-1) to indicate that only one page contains the TB.
*
* Called with mmap_lock held for user-mode emulation.
*
* Returns a pointer @tb, or a pointer to an existing TB that matches @tb.
@@ -972,29 +944,43 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
* for the same block of guest code that @tb corresponds to. In that case,
* the caller should discard the original @tb, and use instead the returned TB.
*/
TranslationBlock *tb_link_page(TranslationBlock *tb)
TranslationBlock *tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb_page_addr_t phys_page2)
{
PageDesc *p;
PageDesc *p2 = NULL;
void *existing_tb = NULL;
uint32_t h;
assert_memory_lock();
tcg_debug_assert(!(tb->cflags & CF_INVALID));
tb_record(tb);
/*
* Add the TB to the page list, acquiring first the pages's locks.
* We keep the locks held until after inserting the TB in the hash table,
* so that if the insertion fails we know for sure that the TBs are still
* in the page descriptors.
* Note that inserting into the hash table first isn't an option, since
* we can only insert TBs that are fully initialized.
*/
page_lock_pair(&p, phys_pc, &p2, phys_page2, true);
tb_record(tb, p, p2);
/* add in the hash table */
h = tb_hash_func(tb_page_addr0(tb), (tb->cflags & CF_PCREL ? 0 : tb->pc),
h = tb_hash_func(phys_pc, (tb->cflags & CF_PCREL ? 0 : tb->pc),
tb->flags, tb->cs_base, tb->cflags);
qht_insert(&tb_ctx.htable, tb, h, &existing_tb);
/* remove TB from the page(s) if we couldn't insert it */
if (unlikely(existing_tb)) {
tb_remove(tb);
tb_unlock_pages(tb);
return existing_tb;
tb = existing_tb;
}
tb_unlock_pages(tb);
if (p2 && p2 != p) {
page_unlock(p2);
}
page_unlock(p);
return tb;
}
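
The comment block above spells out the ordering constraint in tb_link_page(): the TB is registered in the page lists first (with the page locks held), then published in the hash table, and on a lost race the page-list registration is rolled back. A toy model of that publish-with-rollback order, assuming a trivial string table in place of qht (hypothetical names, not QEMU code):

#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 16
static const char *table[TABLE_SIZE];

static const char *publish(const char *obj)
{
    size_t h = strlen(obj) % TABLE_SIZE;    /* stand-in for tb_hash_func */

    /* ...side structures updated here, locks held (cf. tb_record)... */
    if (table[h] && strcmp(table[h], obj) == 0) {
        /* lost the race: roll back the side structures (cf. tb_remove) */
        return table[h];                    /* caller discards its copy */
    }
    table[h] = obj;                         /* cf. qht_insert succeeding */
    return obj;
}

int main(void)
{
    char x[] = "block", y[] = "block";

    /* Both calls return the first-published pointer. */
    printf("same=%d\n", publish(x) == publish(y));  /* prints same=1 */
    return 0;
}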
@@ -1083,8 +1069,7 @@ bool tb_invalidate_phys_page_unwind(tb_page_addr_t addr, uintptr_t pc)
if (current_tb_modified) {
/* Force execution of one insn next time. */
CPUState *cpu = current_cpu;
cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
return true;
}
return false;
@@ -1107,9 +1092,6 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
TranslationBlock *current_tb = retaddr ? tcg_tb_lookup(retaddr) : NULL;
#endif /* TARGET_HAS_PRECISE_SMC */
/* Range may not cross a page. */
tcg_debug_assert(((start ^ last) & TARGET_PAGE_MASK) == 0);
/*
* We remove all the TBs in the range [start, last].
* XXX: see if in some cases it could be faster to invalidate all the code
@@ -1154,8 +1136,7 @@ tb_invalidate_phys_page_range__locked(struct page_collection *pages,
if (current_tb_modified) {
page_collection_unlock(pages);
/* Force execution of one insn next time. */
current_cpu->cflags_next_tb =
1 | CF_LAST_IO | CF_NOIRQ | curr_cflags(current_cpu);
current_cpu->cflags_next_tb = 1 | CF_NOIRQ | curr_cflags(current_cpu);
mmap_unlock();
cpu_loop_exit_noexc(current_cpu);
}
@@ -1201,17 +1182,15 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t last)
index_last = last >> TARGET_PAGE_BITS;
for (index = start >> TARGET_PAGE_BITS; index <= index_last; index++) {
PageDesc *pd = page_find(index);
tb_page_addr_t page_start, page_last;
tb_page_addr_t bound;
if (pd == NULL) {
continue;
}
assert_page_locked(pd);
page_start = index << TARGET_PAGE_BITS;
page_last = page_start | ~TARGET_PAGE_MASK;
page_last = MIN(page_last, last);
tb_invalidate_phys_page_range__locked(pages, pd,
page_start, page_last, 0);
bound = (index << TARGET_PAGE_BITS) | ~TARGET_PAGE_MASK;
bound = MIN(bound, last);
tb_invalidate_phys_page_range__locked(pages, pd, start, bound, 0);
}
page_collection_unlock(pages);
}


@@ -100,9 +100,14 @@ static void *mttcg_cpu_thread_fn(void *arg)
break;
case EXCP_HALTED:
/*
* Usually cpu->halted is set, but may have already been
* reset by another thread by the time we arrive here.
* during start-up the vCPU is reset and the thread is
* kicked several times. If we don't ensure we go back
* to sleep in the halted state we won't cleanly
* start-up when the vCPU is enabled.
*
* cpu->halted should ensure we sleep in wait_io_event
*/
g_assert(cpu->halted);
break;
case EXCP_ATOMIC:
qemu_mutex_unlock_iothread();
@@ -147,4 +152,8 @@ void mttcg_start_vcpu_thread(CPUState *cpu)
qemu_thread_create(cpu->thread, thread_name, mttcg_cpu_thread_fn,
cpu, QEMU_THREAD_JOINABLE);
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
#endif
}


@@ -329,6 +329,9 @@ void rr_start_vcpu_thread(CPUState *cpu)
single_tcg_halt_cond = cpu->halt_cond;
single_tcg_cpu_thread = cpu->thread;
#ifdef _WIN32
cpu->hThread = qemu_thread_get_handle(cpu->thread);
#endif
} else {
/* we share the thread */
cpu->thread = single_tcg_cpu_thread;


@@ -58,7 +58,7 @@ DEF_HELPER_FLAGS_5(atomic_cmpxchgq_be, TCG_CALL_NO_WG,
DEF_HELPER_FLAGS_5(atomic_cmpxchgq_le, TCG_CALL_NO_WG,
i64, env, i64, i64, i64, i32)
#endif
#if HAVE_CMPXCHG128
#ifdef CONFIG_CMPXCHG128
DEF_HELPER_FLAGS_5(atomic_cmpxchgo_be, TCG_CALL_NO_WG,
i128, env, i64, i128, i128, i32)
DEF_HELPER_FLAGS_5(atomic_cmpxchgo_le, TCG_CALL_NO_WG,


@@ -290,7 +290,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
{
CPUArchState *env = cpu->env_ptr;
TranslationBlock *tb, *existing_tb;
tb_page_addr_t phys_pc, phys_p2;
tb_page_addr_t phys_pc;
tcg_insn_unit *gen_code_buf;
int gen_code_size, search_size, max_insns;
int64_t ti;
@@ -313,7 +313,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
QEMU_BUILD_BUG_ON(CF_COUNT_MASK + 1 != TCG_MAX_INSNS);
buffer_overflow:
assert_no_pages_locked();
tb = tcg_tb_alloc(tcg_ctx);
if (unlikely(!tb)) {
/* flush must be done */
@@ -334,10 +333,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tb->cflags = cflags;
tb_set_page_addr0(tb, phys_pc);
tb_set_page_addr1(tb, -1);
if (phys_pc != -1) {
tb_lock_page0(phys_pc);
}
tcg_ctx->gen_tb = tb;
tcg_ctx->addr_type = TARGET_LONG_BITS == 32 ? TCG_TYPE_I32 : TCG_TYPE_I64;
#ifdef CONFIG_SOFTMMU
@@ -354,7 +349,8 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
tcg_ctx->guest_mo = TCG_MO_ALL;
#endif
restart_translate:
tb_overflow:
trace_translate_block(tb, pc, tb->tc.ptr);
gen_code_size = setjmp_gen_code(env, tb, pc, host_pc, &max_insns, &ti);
@@ -373,8 +369,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation for "
"code_gen_buffer overflow\n");
tb_unlock_pages(tb);
tcg_ctx->gen_tb = NULL;
goto buffer_overflow;
case -2:
@@ -393,39 +387,14 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
"Restarting code generation with "
"smaller translation block (max %d insns)\n",
max_insns);
/*
* The half-sized TB may not cross pages.
* TODO: Fix all targets that cross pages except with
* the first insn, at which point this can't be reached.
*/
phys_p2 = tb_page_addr1(tb);
if (unlikely(phys_p2 != -1)) {
tb_unlock_page1(phys_pc, phys_p2);
tb_set_page_addr1(tb, -1);
}
goto restart_translate;
case -3:
/*
* We had a page lock ordering problem. In order to avoid
* deadlock we had to drop the lock on page0, which means
* that everything we translated so far is compromised.
* Restart with locks held on both pages.
*/
qemu_log_mask(CPU_LOG_TB_OP | CPU_LOG_TB_OP_OPT,
"Restarting code generation with re-locked pages");
goto restart_translate;
goto tb_overflow;
default:
g_assert_not_reached();
}
}
tcg_ctx->gen_tb = NULL;
search_size = encode_search(tb, (void *)gen_code_buf + gen_code_size);
if (unlikely(search_size < 0)) {
tb_unlock_pages(tb);
goto buffer_overflow;
}
tb->tc.size = gen_code_size;
@@ -535,7 +504,6 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
* before attempting to link to other TBs or add to the lookup table.
*/
if (tb_page_addr0(tb) == -1) {
assert_no_pages_locked();
return tb;
}
@@ -550,9 +518,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
* No explicit memory barrier is required -- tb_link_page() makes the
* TB visible in a consistent state.
*/
existing_tb = tb_link_page(tb);
assert_no_pages_locked();
existing_tb = tb_link_page(tb, tb_page_addr0(tb), tb_page_addr1(tb));
/* if the TB already exists, discard what we just translated */
if (unlikely(existing_tb != tb)) {
uintptr_t orig_aligned = (uintptr_t)gen_code_buf;
@@ -638,7 +604,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
if (qemu_loglevel_mask(CPU_LOG_EXEC)) {
vaddr pc = log_pc(cpu, tb);
if (qemu_log_in_addr_range(pc)) {
qemu_log("cpu_io_recompile: rewound execution of TB to %016"
qemu_log("cpu_io_recompile: rewound execution of TB to %"
VADDR_PRIx "\n", pc);
}
}


@@ -12,23 +12,30 @@
#include "qemu/error-report.h"
#include "exec/exec-all.h"
#include "exec/translator.h"
#include "exec/translate-all.h"
#include "exec/plugin-gen.h"
#include "tcg/tcg-op-common.h"
#include "internal.h"
static void set_can_do_io(DisasContextBase *db, bool val)
static void gen_io_start(void)
{
if (db->saved_can_do_io != val) {
db->saved_can_do_io = val;
tcg_gen_st_i32(tcg_constant_i32(val), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
tcg_gen_st_i32(tcg_constant_i32(1), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
bool translator_io_start(DisasContextBase *db)
{
set_can_do_io(db, true);
uint32_t cflags = tb_cflags(db->tb);
if (!(cflags & CF_USE_ICOUNT)) {
return false;
}
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Already started in translator_loop. */
return true;
}
gen_io_start();
/*
* Ensure that this instruction will be the last in the TB.
@@ -40,17 +47,14 @@ bool translator_io_start(DisasContextBase *db)
return true;
}
static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
static TCGOp *gen_tb_start(uint32_t cflags)
{
TCGv_i32 count = NULL;
TCGv_i32 count = tcg_temp_new_i32();
TCGOp *icount_start_insn = NULL;
if ((cflags & CF_USE_ICOUNT) || !(cflags & CF_NOIRQ)) {
count = tcg_temp_new_i32();
tcg_gen_ld_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u32) -
offsetof(ArchCPU, env));
}
tcg_gen_ld_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u32) -
offsetof(ArchCPU, env));
if (cflags & CF_USE_ICOUNT) {
/*
@@ -80,15 +84,18 @@ static TCGOp *gen_tb_start(DisasContextBase *db, uint32_t cflags)
tcg_gen_st16_i32(count, cpu_env,
offsetof(ArchCPU, neg.icount_decr.u16.low) -
offsetof(ArchCPU, env));
/*
* cpu->can_do_io is cleared automatically here at the beginning of
* each translation block. The cost is minimal and only paid for
* -icount, plus it would be very easy to forget doing it in the
* translator. Doing it here means we don't need a gen_io_end() to
* go with gen_io_start().
*/
tcg_gen_st_i32(tcg_constant_i32(0), cpu_env,
offsetof(ArchCPU, parent_obj.can_do_io) -
offsetof(ArchCPU, env));
}
/*
* cpu->can_do_io is set automatically here at the beginning of
* each translation block. The cost is minimal, plus it would be
* very easy to forget doing it in the translator.
*/
set_can_do_io(db, db->max_insns == 1 && (cflags & CF_LAST_IO));
return icount_start_insn;
}
@@ -137,25 +144,22 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
db->num_insns = 0;
db->max_insns = *max_insns;
db->singlestep_enabled = cflags & CF_SINGLE_STEP;
db->saved_can_do_io = -1;
db->host_addr[0] = host_pc;
db->host_addr[1] = NULL;
#ifdef CONFIG_USER_ONLY
page_protect(pc);
#endif
ops->init_disas_context(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
/* Start translating. */
icount_start_insn = gen_tb_start(db, cflags);
icount_start_insn = gen_tb_start(cflags);
ops->tb_start(db, cpu);
tcg_debug_assert(db->is_jmp == DISAS_NEXT); /* no early exit */
if (cflags & CF_MEMI_ONLY) {
/* We should only see CF_MEMI_ONLY for io_recompile. */
assert(cflags & CF_LAST_IO);
plugin_enabled = plugin_gen_tb_start(cpu, db, true);
} else {
plugin_enabled = plugin_gen_tb_start(cpu, db, false);
}
plugin_enabled = plugin_gen_tb_start(cpu, db, cflags & CF_MEMI_ONLY);
while (true) {
*max_insns = ++db->num_insns;
@@ -172,9 +176,13 @@ void translator_loop(CPUState *cpu, TranslationBlock *tb, int *max_insns,
the next instruction. */
if (db->num_insns == db->max_insns && (cflags & CF_LAST_IO)) {
/* Accept I/O on the last instruction. */
set_can_do_io(db, true);
gen_io_start();
ops->translate_insn(db, cpu);
} else {
/* we should only see CF_MEMI_ONLY for io_recompile */
tcg_debug_assert(!(cflags & CF_MEMI_ONLY));
ops->translate_insn(db, cpu);
}
ops->translate_insn(db, cpu);
/*
* We can't instrument after instructions that change control
@@ -248,36 +256,22 @@ static void *translator_access(CPUArchState *env, DisasContextBase *db,
host = db->host_addr[1];
base = TARGET_PAGE_ALIGN(db->pc_first);
if (host == NULL) {
tb_page_addr_t page0, old_page1, new_page1;
new_page1 = get_page_addr_code_hostp(env, base, &db->host_addr[1]);
tb_page_addr_t phys_page =
get_page_addr_code_hostp(env, base, &db->host_addr[1]);
/*
* If the second page is MMIO, treat as if the first page
* was MMIO as well, so that we do not cache the TB.
*/
if (unlikely(new_page1 == -1)) {
tb_unlock_pages(tb);
if (unlikely(phys_page == -1)) {
tb_set_page_addr0(tb, -1);
return NULL;
}
/*
* If this is not the first time around, and page1 matches,
* then we already have the page locked. Alternately, we're
* not doing anything to prevent the PTE from changing, so
* we might wind up with a different page, requiring us to
* re-do the locking.
*/
old_page1 = tb_page_addr1(tb);
if (likely(new_page1 != old_page1)) {
page0 = tb_page_addr0(tb);
if (unlikely(old_page1 != -1)) {
tb_unlock_page1(page0, old_page1);
}
tb_set_page_addr1(tb, new_page1);
tb_lock_page1(page0, new_page1);
}
tb_set_page_addr1(tb, phys_page);
#ifdef CONFIG_USER_ONLY
page_protect(end);
#endif
host = db->host_addr[1];
}


@@ -144,7 +144,7 @@ typedef struct PageFlagsNode {
static IntervalTreeRoot pageflags_root;
static PageFlagsNode *pageflags_find(target_ulong start, target_ulong last)
static PageFlagsNode *pageflags_find(target_ulong start, target_long last)
{
IntervalTreeNode *n;
@@ -153,7 +153,7 @@ static PageFlagsNode *pageflags_find(target_ulong start, target_ulong last)
}
static PageFlagsNode *pageflags_next(PageFlagsNode *p, target_ulong start,
target_ulong last)
target_long last)
{
IntervalTreeNode *n;
@@ -520,19 +520,19 @@ void page_set_flags(target_ulong start, target_ulong last, int flags)
}
}
bool page_check_range(target_ulong start, target_ulong len, int flags)
int page_check_range(target_ulong start, target_ulong len, int flags)
{
target_ulong last;
int locked; /* tri-state: =0: unlocked, +1: global, -1: local */
bool ret;
int ret;
if (len == 0) {
return true; /* trivial length */
return 0; /* trivial length */
}
last = start + len - 1;
if (last < start) {
return false; /* wrap around */
return -1; /* wrap around */
}
locked = have_mmap_lock();
@@ -551,33 +551,33 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
p = pageflags_find(start, last);
}
if (!p) {
ret = false; /* entire region invalid */
ret = -1; /* entire region invalid */
break;
}
}
if (start < p->itree.start) {
ret = false; /* initial bytes invalid */
ret = -1; /* initial bytes invalid */
break;
}
missing = flags & ~p->flags;
if (missing & ~PAGE_WRITE) {
ret = false; /* page doesn't match */
if (missing & PAGE_READ) {
ret = -1; /* page not readable */
break;
}
if (missing & PAGE_WRITE) {
if (!(p->flags & PAGE_WRITE_ORG)) {
ret = false; /* page not writable */
ret = -1; /* page not writable */
break;
}
/* Asking about writable, but has been protected: undo. */
if (!page_unprotect(start, 0)) {
ret = false;
ret = -1;
break;
}
/* TODO: page_unprotect should take a range, not a single page. */
if (last - start < TARGET_PAGE_SIZE) {
ret = true; /* ok */
ret = 0; /* ok */
break;
}
start += TARGET_PAGE_SIZE;
@@ -585,7 +585,7 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
}
if (last <= p->itree.last) {
ret = true; /* ok */
ret = 0; /* ok */
break;
}
start = p->itree.last + 1;
@@ -598,54 +598,6 @@ bool page_check_range(target_ulong start, target_ulong len, int flags)
return ret;
}
bool page_check_range_empty(target_ulong start, target_ulong last)
{
assert(last >= start);
assert_memory_lock();
return pageflags_find(start, last) == NULL;
}
target_ulong page_find_range_empty(target_ulong min, target_ulong max,
target_ulong len, target_ulong align)
{
target_ulong len_m1, align_m1;
assert(min <= max);
assert(max <= GUEST_ADDR_MAX);
assert(len != 0);
assert(is_power_of_2(align));
assert_memory_lock();
len_m1 = len - 1;
align_m1 = align - 1;
/* Iteratively narrow the search region. */
while (1) {
PageFlagsNode *p;
/* Align min and double-check there's enough space remaining. */
min = (min + align_m1) & ~align_m1;
if (min > max) {
return -1;
}
if (len_m1 > max - min) {
return -1;
}
p = pageflags_find(min, min + len_m1);
if (p == NULL) {
/* Found! */
return min;
}
if (max <= p->itree.last) {
/* Existing allocation fills the remainder of the search region. */
return -1;
}
/* Skip across existing allocation. */
min = p->itree.last + 1;
}
}
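
page_find_range_empty() above searches for an aligned, unallocated range by repeatedly aligning the candidate start and skipping past whatever allocation overlaps it. The same loop, sketched standalone against a sorted array instead of QEMU's interval tree (an assumed simplification):

#include <stdint.h>
#include <stdio.h>

typedef struct { uint64_t first, last; } Interval;  /* sorted by address */

static uint64_t find_empty(const Interval *used, int n,
                           uint64_t min, uint64_t max,
                           uint64_t len, uint64_t align)
{
    uint64_t len_m1 = len - 1, align_m1 = align - 1;

    for (;;) {
        const Interval *hit = NULL;

        min = (min + align_m1) & ~align_m1;     /* align candidate start */
        if (min > max || len_m1 > max - min) {
            return UINT64_MAX;                  /* no aligned room left */
        }
        for (int i = 0; i < n; i++) {
            if (used[i].last >= min && used[i].first <= min + len_m1) {
                hit = &used[i];
                break;
            }
        }
        if (!hit) {
            return min;                         /* found a free range */
        }
        if (max <= hit->last) {
            return UINT64_MAX;                  /* allocation fills the rest */
        }
        min = hit->last + 1;                    /* skip the allocation */
    }
}

int main(void)
{
    Interval used[] = { { 0x1000, 0x2fff }, { 0x5000, 0x5fff } };

    /* First 0x2000-byte, 0x1000-aligned hole above the allocations: 0x3000 */
    printf("0x%llx\n", (unsigned long long)
           find_empty(used, 2, 0x0, 0xffff, 0x2000, 0x1000));
    return 0;
}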
void page_protect(tb_page_addr_t address)
{
PageFlagsNode *p;
@@ -793,10 +745,6 @@ static int probe_access_internal(CPUArchState *env, vaddr addr,
if (guest_addr_valid_untagged(addr)) {
int page_flags = page_get_flags(addr);
if (page_flags & acc_flag) {
if ((acc_flag == PAGE_READ || acc_flag == PAGE_WRITE)
&& cpu_plugin_mem_cbs_enabled(env_cpu(env))) {
return TLB_MMIO;
}
return 0; /* success */
}
maperr = !(page_flags & PAGE_VALID);
@@ -819,7 +767,7 @@ int probe_access_flags(CPUArchState *env, vaddr addr, int size,
g_assert(-(addr | TARGET_PAGE_MASK) >= size);
flags = probe_access_internal(env, addr, size, access_type, nonfault, ra);
*phost = (flags & TLB_INVALID_MASK) ? NULL : g2h(env_cpu(env), addr);
*phost = flags ? NULL : g2h(env_cpu(env), addr);
return flags;
}
@@ -830,7 +778,7 @@ void *probe_access(CPUArchState *env, vaddr addr, int size,
g_assert(-(addr | TARGET_PAGE_MASK) >= size);
flags = probe_access_internal(env, addr, size, access_type, false, ra);
g_assert((flags & ~TLB_MMIO) == 0);
g_assert(flags == 0);
return size ? g2h(env_cpu(env), addr) : NULL;
}
@@ -1433,7 +1381,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, vaddr addr, MemOpIdx oi,
#include "atomic_template.h"
#endif
#if defined(CONFIG_ATOMIC128) || HAVE_CMPXCHG128
#if defined(CONFIG_ATOMIC128) || defined(CONFIG_CMPXCHG128)
#define DATA_SIZE 16
#include "atomic_template.h"
#endif


@@ -29,11 +29,7 @@
#include "qemu/timer.h"
#include "qemu/dbus.h"
#ifdef G_OS_UNIX
#include <gio/gunixfdlist.h>
#endif
#include "ui/dbus.h"
#include "ui/dbus-display1.h"
#define AUDIO_CAP "dbus"
@@ -448,9 +444,7 @@ listener_in_vanished_cb(GDBusConnection *connection,
static gboolean
dbus_audio_register_listener(AudioState *s,
GDBusMethodInvocation *invocation,
#ifdef G_OS_UNIX
GUnixFDList *fd_list,
#endif
GVariant *arg_listener,
bool out)
{
@@ -477,11 +471,6 @@ dbus_audio_register_listener(AudioState *s,
return DBUS_METHOD_INVOCATION_HANDLED;
}
#ifdef G_OS_WIN32
if (!dbus_win32_import_socket(invocation, arg_listener, &fd)) {
return DBUS_METHOD_INVOCATION_HANDLED;
}
#else
fd = g_unix_fd_list_get(fd_list, g_variant_get_handle(arg_listener), &err);
if (err) {
g_dbus_method_invocation_return_error(invocation,
@@ -491,7 +480,6 @@ dbus_audio_register_listener(AudioState *s,
err->message);
return DBUS_METHOD_INVOCATION_HANDLED;
}
#endif
socket = g_socket_new_from_fd(fd, &err);
if (err) {
@@ -500,28 +488,15 @@ dbus_audio_register_listener(AudioState *s,
DBUS_DISPLAY_ERROR_FAILED,
"Couldn't make a socket: %s",
err->message);
#ifdef G_OS_WIN32
closesocket(fd);
#else
close(fd);
#endif
return DBUS_METHOD_INVOCATION_HANDLED;
}
socket_conn = g_socket_connection_factory_create_connection(socket);
if (out) {
qemu_dbus_display1_audio_complete_register_out_listener(
da->iface, invocation
#ifdef G_OS_UNIX
, NULL
#endif
);
da->iface, invocation, NULL);
} else {
qemu_dbus_display1_audio_complete_register_in_listener(
da->iface, invocation
#ifdef G_OS_UNIX
, NULL
#endif
);
da->iface, invocation, NULL);
}
listener_conn =
@@ -599,32 +574,22 @@ dbus_audio_register_listener(AudioState *s,
static gboolean
dbus_audio_register_out_listener(AudioState *s,
GDBusMethodInvocation *invocation,
#ifdef G_OS_UNIX
GUnixFDList *fd_list,
#endif
GVariant *arg_listener)
{
return dbus_audio_register_listener(s, invocation,
#ifdef G_OS_UNIX
fd_list,
#endif
arg_listener, true);
fd_list, arg_listener, true);
}
static gboolean
dbus_audio_register_in_listener(AudioState *s,
GDBusMethodInvocation *invocation,
#ifdef G_OS_UNIX
GUnixFDList *fd_list,
#endif
GVariant *arg_listener)
{
return dbus_audio_register_listener(s, invocation,
#ifdef G_OS_UNIX
fd_list,
#endif
arg_listener, false);
fd_list, arg_listener, false);
}
static void


@@ -31,7 +31,7 @@ endforeach
if dbus_display
module_ss = ss.source_set()
module_ss.add(when: [gio, pixman], if_true: files('dbusaudio.c'))
module_ss.add(when: gio, if_true: files('dbusaudio.c'))
audio_modules += {'dbus': module_ss}
endif


@@ -1,5 +1,5 @@
/*
* QEMU PipeWire audio driver
* QEMU Pipewire audio driver
*
* Copyright (c) 2023 Red Hat Inc.
*
@@ -66,9 +66,6 @@ typedef struct PWVoiceIn {
PWVoice v;
} PWVoiceIn;
#define PW_VOICE_IN(v) ((PWVoiceIn *)v)
#define PW_VOICE_OUT(v) ((PWVoiceOut *)v)
static void
stream_destroy(void *data)
{
@@ -200,6 +197,16 @@ on_stream_state_changed(void *data, enum pw_stream_state old,
trace_pw_state_changed(pw_stream_get_node_id(v->stream),
pw_stream_state_as_string(state));
switch (state) {
case PW_STREAM_STATE_ERROR:
case PW_STREAM_STATE_UNCONNECTED:
break;
case PW_STREAM_STATE_PAUSED:
case PW_STREAM_STATE_CONNECTING:
case PW_STREAM_STATE_STREAMING:
break;
}
}
static const struct pw_stream_events capture_stream_events = {
@@ -417,8 +424,8 @@ pw_to_audfmt(enum spa_audio_format fmt, int *endianness,
}
static int
qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
create_stream(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
{
int res;
uint32_t n_params;
@@ -429,10 +436,6 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
struct pw_properties *props;
props = pw_properties_new(NULL, NULL);
if (!props) {
error_report("Failed to create PW properties: %s", g_strerror(errno));
return -1;
}
/* 75% of the timer period for faster updates */
buf_samples = (uint64_t)v->g->dev->timer_period * v->info.rate
@@ -445,8 +448,8 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
pw_properties_set(props, PW_KEY_TARGET_OBJECT, name);
}
v->stream = pw_stream_new(c->core, stream_name, props);
if (v->stream == NULL) {
error_report("Failed to create PW stream: %s", g_strerror(errno));
return -1;
}
@@ -474,7 +477,6 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
PW_STREAM_FLAG_MAP_BUFFERS |
PW_STREAM_FLAG_RT_PROCESS, params, n_params);
if (res < 0) {
error_report("Failed to connect PW stream: %s", g_strerror(errno));
pw_stream_destroy(v->stream);
return -1;
}
@@ -482,37 +484,71 @@ qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
return 0;
}
static void
qpw_set_position(uint32_t channels, uint32_t position[SPA_AUDIO_MAX_CHANNELS])
static int
qpw_stream_new(pwaudio *c, PWVoice *v, const char *stream_name,
const char *name, enum spa_direction dir)
{
memcpy(position, (uint32_t[SPA_AUDIO_MAX_CHANNELS]) { SPA_AUDIO_CHANNEL_UNKNOWN, },
sizeof(uint32_t) * SPA_AUDIO_MAX_CHANNELS);
/*
* TODO: This currently expects the only frontend supporting more than 2
* channels is the usb-audio. We will need some means to set channel
* order when a new frontend gains multi-channel support.
*/
switch (channels) {
int r;
switch (v->info.channels) {
case 8:
position[6] = SPA_AUDIO_CHANNEL_SL;
position[7] = SPA_AUDIO_CHANNEL_SR;
/* fallthrough */
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RL;
v->info.position[5] = SPA_AUDIO_CHANNEL_RR;
v->info.position[6] = SPA_AUDIO_CHANNEL_SL;
v->info.position[7] = SPA_AUDIO_CHANNEL_SR;
break;
case 6:
position[2] = SPA_AUDIO_CHANNEL_FC;
position[3] = SPA_AUDIO_CHANNEL_LFE;
position[4] = SPA_AUDIO_CHANNEL_RL;
position[5] = SPA_AUDIO_CHANNEL_RR;
/* fallthrough */
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RL;
v->info.position[5] = SPA_AUDIO_CHANNEL_RR;
break;
case 5:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_LFE;
v->info.position[4] = SPA_AUDIO_CHANNEL_RC;
break;
case 4:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_FC;
v->info.position[3] = SPA_AUDIO_CHANNEL_RC;
break;
case 3:
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[2] = SPA_AUDIO_CHANNEL_LFE;
break;
case 2:
position[0] = SPA_AUDIO_CHANNEL_FL;
position[1] = SPA_AUDIO_CHANNEL_FR;
v->info.position[0] = SPA_AUDIO_CHANNEL_FL;
v->info.position[1] = SPA_AUDIO_CHANNEL_FR;
break;
case 1:
position[0] = SPA_AUDIO_CHANNEL_MONO;
v->info.position[0] = SPA_AUDIO_CHANNEL_MONO;
break;
default:
dolog("Internal error: unsupported channel count %d\n", channels);
for (size_t i = 0; i < v->info.channels; i++) {
v->info.position[i] = SPA_AUDIO_CHANNEL_UNKNOWN;
}
break;
}
/* create a new unconnected pwstream */
r = create_stream(c, v, stream_name, name, dir);
if (r < 0) {
AUD_log(AUDIO_CAP, "Failed to create stream.");
return -1;
}
return r;
}
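
The switch in qpw_set_position() above relies on case fallthrough: each higher channel count sets only its extra speaker positions and then falls through to the shared prefix, instead of repeating it per case as the old code did. A self-contained sketch of the same trick (simplified enum, not the SPA channel API):

#include <stdio.h>

enum { FL, FR, FC, LFE, RL, RR, SL, SR, MONO, UNKNOWN };

/* Higher counts add their extra positions, then fall through into the
 * shared prefix rather than repeating it per case. */
static void set_position(unsigned channels, int pos[8])
{
    for (unsigned i = 0; i < 8; i++) {
        pos[i] = UNKNOWN;
    }
    switch (channels) {
    case 8:
        pos[6] = SL;
        pos[7] = SR;
        /* fallthrough */
    case 6:
        pos[2] = FC;
        pos[3] = LFE;
        pos[4] = RL;
        pos[5] = RR;
        /* fallthrough */
    case 2:
        pos[0] = FL;
        pos[1] = FR;
        break;
    case 1:
        pos[0] = MONO;
        break;
    }
}

int main(void)
{
    int pos[8];

    set_position(6, pos);
    printf("%d %d %d\n", pos[0], pos[2], pos[5]);   /* FL FC RR: 0 2 5 */
    return 0;
}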
static int
@@ -530,7 +566,6 @@ qpw_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
v->info.format = audfmt_to_pw(as->fmt, as->endianness);
v->info.channels = as->nchannels;
qpw_set_position(as->nchannels, v->info.position);
v->info.rate = as->freq;
obt_as.fmt =
@@ -544,6 +579,7 @@ qpw_init_out(HWVoiceOut *hw, struct audsettings *as, void *drv_opaque)
r = qpw_stream_new(c, v, ppdo->stream_name ? : c->dev->id,
ppdo->name, SPA_DIRECTION_OUTPUT);
if (r < 0) {
error_report("qpw_stream_new for playback failed");
pw_thread_loop_unlock(c->thread_loop);
return -1;
}
@@ -577,7 +613,6 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
v->info.format = audfmt_to_pw(as->fmt, as->endianness);
v->info.channels = as->nchannels;
qpw_set_position(as->nchannels, v->info.position);
v->info.rate = as->freq;
obt_as.fmt =
@@ -588,6 +623,7 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
r = qpw_stream_new(c, v, ppdo->stream_name ? : c->dev->id,
ppdo->name, SPA_DIRECTION_INPUT);
if (r < 0) {
error_report("qpw_stream_new for recording failed");
pw_thread_loop_unlock(c->thread_loop);
return -1;
}
@@ -603,56 +639,63 @@ qpw_init_in(HWVoiceIn *hw, struct audsettings *as, void *drv_opaque)
return 0;
}
static void
qpw_voice_fini(PWVoice *v)
{
pwaudio *c = v->g;
if (!v->stream) {
return;
}
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_fini_out(HWVoiceOut *hw)
{
qpw_voice_fini(&PW_VOICE_OUT(hw)->v);
PWVoiceOut *pw = (PWVoiceOut *) hw;
PWVoice *v = &pw->v;
if (v->stream) {
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
}
static void
qpw_fini_in(HWVoiceIn *hw)
{
qpw_voice_fini(&PW_VOICE_IN(hw)->v);
PWVoiceIn *pw = (PWVoiceIn *) hw;
PWVoice *v = &pw->v;
if (v->stream) {
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_destroy(v->stream);
v->stream = NULL;
pw_thread_loop_unlock(c->thread_loop);
}
}
static void
qpw_voice_set_enabled(PWVoice *v, bool enable)
qpw_enable_out(HWVoiceOut *hw, bool enable)
{
PWVoiceOut *po = (PWVoiceOut *) hw;
PWVoice *v = &po->v;
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_set_active(v->stream, enable);
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_enable_out(HWVoiceOut *hw, bool enable)
{
qpw_voice_set_enabled(&PW_VOICE_OUT(hw)->v, enable);
}
static void
qpw_enable_in(HWVoiceIn *hw, bool enable)
{
qpw_voice_set_enabled(&PW_VOICE_IN(hw)->v, enable);
PWVoiceIn *pi = (PWVoiceIn *) hw;
PWVoice *v = &pi->v;
pwaudio *c = v->g;
pw_thread_loop_lock(c->thread_loop);
pw_stream_set_active(v->stream, enable);
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_voice_set_volume(PWVoice *v, Volume *vol)
qpw_volume_out(HWVoiceOut *hw, Volume *vol)
{
PWVoiceOut *pw = (PWVoiceOut *) hw;
PWVoice *v = &pw->v;
pwaudio *c = v->g;
int i, ret;
@@ -673,16 +716,29 @@ qpw_voice_set_volume(PWVoice *v, Volume *vol)
pw_thread_loop_unlock(c->thread_loop);
}
static void
qpw_volume_out(HWVoiceOut *hw, Volume *vol)
{
qpw_voice_set_volume(&PW_VOICE_OUT(hw)->v, vol);
}
static void
qpw_volume_in(HWVoiceIn *hw, Volume *vol)
{
qpw_voice_set_volume(&PW_VOICE_IN(hw)->v, vol);
PWVoiceIn *pw = (PWVoiceIn *) hw;
PWVoice *v = &pw->v;
pwaudio *c = v->g;
int i, ret;
pw_thread_loop_lock(c->thread_loop);
v->volume.channels = vol->channels;
for (i = 0; i < vol->channels; ++i) {
v->volume.values[i] = (float)vol->vol[i] / 255;
}
ret = pw_stream_set_control(v->stream,
SPA_PROP_channelVolumes, v->volume.channels, v->volume.values, 0);
trace_pw_vol(ret == 0 ? "success" : "failed");
v->muted = vol->mute;
float val = v->muted ? 1.f : 0.f;
ret = pw_stream_set_control(v->stream, SPA_PROP_mute, 1, &val, 0);
pw_thread_loop_unlock(c->thread_loop);
}
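
The hunks above deduplicate the In/Out callback pairs: since PWVoiceIn and PWVoiceOut both embed a PWVoice, each fini/enable/volume pair collapses into one helper that takes the embedded struct. A minimal sketch of that refactoring pattern, with hypothetical stand-in types:

#include <stdio.h>

typedef struct Voice { int enabled; } Voice;
typedef struct VoiceOut { Voice v; } VoiceOut;  /* embeds the shared part */
typedef struct VoiceIn  { Voice v; } VoiceIn;

/* One helper carries the shared logic... */
static void voice_set_enabled(Voice *v, int enable)
{
    v->enabled = enable;
}

/* ...and the per-direction callbacks shrink to one-line wrappers. */
static void enable_out(VoiceOut *o, int e) { voice_set_enabled(&o->v, e); }
static void enable_in(VoiceIn *i, int e)   { voice_set_enabled(&i->v, e); }

int main(void)
{
    VoiceOut out = {0};
    VoiceIn in = {0};

    enable_out(&out, 1);
    enable_in(&in, 0);
    printf("%d %d\n", out.v.enabled, in.v.enabled);
    return 0;
}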
static int wait_resync(pwaudio *pw)
@@ -704,7 +760,6 @@ static int wait_resync(pwaudio *pw)
}
return 0;
}
static void
on_core_error(void *data, uint32_t id, int seq, int res, const char *message)
{
@@ -739,28 +794,27 @@ static void *
qpw_audio_init(Audiodev *dev)
{
g_autofree pwaudio *pw = g_new0(pwaudio, 1);
assert(dev->driver == AUDIODEV_DRIVER_PIPEWIRE);
trace_pw_audio_init();
pw_init(NULL, NULL);
trace_pw_audio_init();
assert(dev->driver == AUDIODEV_DRIVER_PIPEWIRE);
pw->dev = dev;
pw->thread_loop = pw_thread_loop_new("PipeWire thread loop", NULL);
pw->thread_loop = pw_thread_loop_new("Pipewire thread loop", NULL);
if (pw->thread_loop == NULL) {
error_report("Could not create PipeWire loop: %s", g_strerror(errno));
error_report("Could not create Pipewire loop");
goto fail;
}
pw->context =
pw_context_new(pw_thread_loop_get_loop(pw->thread_loop), NULL, 0);
if (pw->context == NULL) {
error_report("Could not create PipeWire context: %s", g_strerror(errno));
error_report("Could not create Pipewire context");
goto fail;
}
if (pw_thread_loop_start(pw->thread_loop) < 0) {
error_report("Could not start PipeWire loop: %s", g_strerror(errno));
error_report("Could not start Pipewire loop");
goto fail;
}
@@ -790,8 +844,12 @@ fail:
if (pw->thread_loop) {
pw_thread_loop_stop(pw->thread_loop);
}
g_clear_pointer(&pw->context, pw_context_destroy);
g_clear_pointer(&pw->thread_loop, pw_thread_loop_destroy);
if (pw->context) {
g_clear_pointer(&pw->context, pw_context_destroy);
}
if (pw->thread_loop) {
g_clear_pointer(&pw->thread_loop, pw_thread_loop_destroy);
}
return NULL;
}


@@ -24,7 +24,7 @@ pw_read(int32_t avail, uint32_t index, size_t len) "avail=%d index=%u len=%zu"
pw_write(int32_t filled, int32_t avail, uint32_t index, size_t len) "filled=%d avail=%d index=%u len=%zu"
pw_vol(const char *ret) "set volume: %s"
pw_period(uint64_t quantum, uint32_t rate) "period =%" PRIu64 "/%u"
pw_audio_init(void) "Initialize PipeWire context"
pw_audio_init(void) "Initialize Pipewire context"
# audio.c
audio_timer_start(int interval) "interval %d ms"


@@ -191,11 +191,6 @@ static int cryptodev_backend_account(CryptoDevBackend *backend,
if (algtype == QCRYPTODEV_BACKEND_ALG_ASYM) {
CryptoDevBackendAsymOpInfo *asym_op_info = op_info->u.asym_op_info;
len = asym_op_info->src_len;
if (unlikely(!backend->asym_stat)) {
error_report("cryptodev: Unexpected asym operation");
return -VIRTIO_CRYPTO_NOTSUPP;
}
switch (op_info->op_code) {
case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
CryptodevAsymStatIncEncrypt(backend, len);
@@ -215,11 +210,6 @@ static int cryptodev_backend_account(CryptoDevBackend *backend,
} else if (algtype == QCRYPTODEV_BACKEND_ALG_SYM) {
CryptoDevBackendSymOpInfo *sym_op_info = op_info->u.sym_op_info;
len = sym_op_info->src_len;
if (unlikely(!backend->sym_stat)) {
error_report("cryptodev: Unexpected sym operation");
return -VIRTIO_CRYPTO_NOTSUPP;
}
switch (op_info->op_code) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
CryptodevSymStatIncEncrypt(backend, len);


@@ -112,8 +112,12 @@ static int tpm_util_request(int fd,
void *response,
size_t responselen)
{
GPollFD fds[1] = { {.fd = fd, .events = G_IO_IN } };
fd_set readfds;
int n;
struct timeval tv = {
.tv_sec = 1,
.tv_usec = 0,
};
n = write(fd, request, requestlen);
if (n < 0) {
@@ -123,8 +127,11 @@ static int tpm_util_request(int fd,
return -EFAULT;
}
FD_ZERO(&readfds);
FD_SET(fd, &readfds);
/* wait for a second */
n = RETRY_ON_EINTR(g_poll(fds, 1, 1000));
n = select(fd + 1, &readfds, NULL, NULL, &tv);
if (n != 1) {
return -errno;
}
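
The hunk above replaces select() with g_poll() wrapped in RETRY_ON_EINTR; a likely motivation for such a change is that fd_set cannot represent descriptors at or above FD_SETSIZE, while poll() has no such cap. A standalone sketch of the resulting pattern using plain poll(2) rather than QEMU's wrappers:

#include <errno.h>
#include <poll.h>
#include <stdio.h>

/*
 * Wait up to timeout_ms for fd to become readable, retrying on EINTR;
 * this mirrors RETRY_ON_EINTR(g_poll(...)) in the new code above.
 */
static int wait_readable(int fd, int timeout_ms)
{
    struct pollfd pfd = { .fd = fd, .events = POLLIN };
    int n;

    do {
        n = poll(&pfd, 1, timeout_ms);
    } while (n < 0 && errno == EINTR);

    return n == 1 ? 0 : -1;
}

int main(void)
{
    printf("%d\n", wait_readable(0, 0));    /* probe stdin, don't block */
    return 0;
}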

block.c

@@ -555,9 +555,8 @@ int coroutine_fn bdrv_co_create(BlockDriver *drv, const char *filename,
* On success, return @blk's actual length.
* Otherwise, return -errno.
*/
static int64_t coroutine_fn GRAPH_UNLOCKED
create_file_fallback_truncate(BlockBackend *blk, int64_t minimum_size,
Error **errp)
static int64_t create_file_fallback_truncate(BlockBackend *blk,
int64_t minimum_size, Error **errp)
{
Error *local_err = NULL;
int64_t size;
@@ -565,14 +564,14 @@ create_file_fallback_truncate(BlockBackend *blk, int64_t minimum_size,
GLOBAL_STATE_CODE();
ret = blk_co_truncate(blk, minimum_size, false, PREALLOC_MODE_OFF, 0,
&local_err);
ret = blk_truncate(blk, minimum_size, false, PREALLOC_MODE_OFF, 0,
&local_err);
if (ret < 0 && ret != -ENOTSUP) {
error_propagate(errp, local_err);
return ret;
}
size = blk_co_getlength(blk);
size = blk_getlength(blk);
if (size < 0) {
error_free(local_err);
error_setg_errno(errp, -size,
@@ -2855,7 +2854,7 @@ uint64_t bdrv_qapi_perm_to_blk_perm(BlockPermission qapi_perm)
* Replaces the node that a BdrvChild points to without updating permissions.
*
* If @new_bs is non-NULL, the parent of @child must already be drained through
* @child and the caller must hold the AioContext lock for @new_bs.
* @child.
*/
static void bdrv_replace_child_noperm(BdrvChild *child,
BlockDriverState *new_bs)
@@ -2894,7 +2893,7 @@ static void bdrv_replace_child_noperm(BdrvChild *child,
}
/* TODO Pull this up into the callers to avoid polling here */
bdrv_graph_wrlock(new_bs);
bdrv_graph_wrlock();
if (old_bs) {
if (child->klass->detach) {
child->klass->detach(child);
@@ -2990,10 +2989,6 @@ static TransactionActionDrv bdrv_attach_child_common_drv = {
* Function doesn't update permissions, caller is responsible for this.
*
* Returns new created child.
*
* The caller must hold the AioContext lock for @child_bs. Both @parent_bs and
* @child_bs can move to a different AioContext in this function. Callers must
* make sure that their AioContext locking is still correct after this.
*/
static BdrvChild *bdrv_attach_child_common(BlockDriverState *child_bs,
const char *child_name,
@@ -3004,7 +2999,7 @@ static BdrvChild *bdrv_attach_child_common(BlockDriverState *child_bs,
Transaction *tran, Error **errp)
{
BdrvChild *new_child;
AioContext *parent_ctx, *new_child_ctx;
AioContext *parent_ctx;
AioContext *child_ctx = bdrv_get_aio_context(child_bs);
assert(child_class->get_parent_desc);
@@ -3055,12 +3050,6 @@ static BdrvChild *bdrv_attach_child_common(BlockDriverState *child_bs,
}
}
new_child_ctx = bdrv_get_aio_context(child_bs);
if (new_child_ctx != child_ctx) {
aio_context_release(child_ctx);
aio_context_acquire(new_child_ctx);
}
bdrv_ref(child_bs);
/*
* Let every new BdrvChild start with a drained parent. Inserting the child
@@ -3090,20 +3079,11 @@ static BdrvChild *bdrv_attach_child_common(BlockDriverState *child_bs,
};
tran_add(tran, &bdrv_attach_child_common_drv, s);
if (new_child_ctx != child_ctx) {
aio_context_release(new_child_ctx);
aio_context_acquire(child_ctx);
}
return new_child;
}
/*
* Function doesn't update permissions, caller is responsible for this.
*
* The caller must hold the AioContext lock for @child_bs. Both @parent_bs and
* @child_bs can move to a different AioContext in this function. Callers must
* make sure that their AioContext locking is still correct after this.
*/
static BdrvChild *bdrv_attach_child_noperm(BlockDriverState *parent_bs,
BlockDriverState *child_bs,
@@ -3367,10 +3347,6 @@ static BdrvChildRole bdrv_backing_role(BlockDriverState *bs)
* callers which don't need their own reference any more must call bdrv_unref().
*
* Function doesn't update permissions, caller is responsible for this.
*
* The caller must hold the AioContext lock for @child_bs. Both @parent_bs and
* @child_bs can move to a different AioContext in this function. Callers must
* make sure that their AioContext locking is still correct after this.
*/
static int bdrv_set_file_or_backing_noperm(BlockDriverState *parent_bs,
BlockDriverState *child_bs,
@@ -3459,11 +3435,6 @@ out:
return 0;
}
/*
* The caller must hold the AioContext lock for @backing_hd. Both @bs and
* @backing_hd can move to a different AioContext in this function. Callers must
* make sure that their AioContext locking is still correct after this.
*/
static int bdrv_set_backing_noperm(BlockDriverState *bs,
BlockDriverState *backing_hd,
Transaction *tran, Error **errp)
@@ -3527,7 +3498,6 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
int ret = 0;
bool implicit_backing = false;
BlockDriverState *backing_hd;
AioContext *backing_hd_ctx;
QDict *options;
QDict *tmp_parent_options = NULL;
Error *local_err = NULL;
@@ -3612,12 +3582,8 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict *parent_options,
/* Hook up the backing file link; drop our reference, bs owns the
* backing_hd reference now */
backing_hd_ctx = bdrv_get_aio_context(backing_hd);
aio_context_acquire(backing_hd_ctx);
ret = bdrv_set_backing_hd(bs, backing_hd, errp);
bdrv_unref(backing_hd);
aio_context_release(backing_hd_ctx);
if (ret < 0) {
goto free_exit;
}
@@ -3688,7 +3654,6 @@ done:
*
* The BlockdevRef will be removed from the options QDict.
*
* The caller must hold the lock of the main AioContext and no other AioContext.
* @parent can move to a different AioContext in this function. Callers must
* make sure that their AioContext locking is still correct after this.
*/
@@ -3700,8 +3665,6 @@ BdrvChild *bdrv_open_child(const char *filename,
bool allow_none, Error **errp)
{
BlockDriverState *bs;
BdrvChild *child;
AioContext *ctx;
GLOBAL_STATE_CODE();
@@ -3711,19 +3674,13 @@ BdrvChild *bdrv_open_child(const char *filename,
return NULL;
}
ctx = bdrv_get_aio_context(bs);
aio_context_acquire(ctx);
child = bdrv_attach_child(parent, bs, bdref_key, child_class, child_role,
errp);
aio_context_release(ctx);
return child;
return bdrv_attach_child(parent, bs, bdref_key, child_class, child_role,
errp);
}
/*
* Wrapper on bdrv_open_child() for most popular case: open primary child of bs.
*
* The caller must hold the lock of the main AioContext and no other AioContext.
* @parent can move to a different AioContext in this function. Callers must
* make sure that their AioContext locking is still correct after this.
*/
@@ -3800,7 +3757,6 @@ static BlockDriverState *bdrv_append_temp_snapshot(BlockDriverState *bs,
int64_t total_size;
QemuOpts *opts = NULL;
BlockDriverState *bs_snapshot = NULL;
AioContext *ctx = bdrv_get_aio_context(bs);
int ret;
GLOBAL_STATE_CODE();
@@ -3809,10 +3765,7 @@ static BlockDriverState *bdrv_append_temp_snapshot(BlockDriverState *bs,
instead of opening 'filename' directly */
/* Get the required size from the image */
aio_context_acquire(ctx);
total_size = bdrv_getlength(bs);
aio_context_release(ctx);
if (total_size < 0) {
error_setg_errno(errp, -total_size, "Could not get image size");
goto out;
@@ -3846,10 +3799,7 @@ static BlockDriverState *bdrv_append_temp_snapshot(BlockDriverState *bs,
goto out;
}
aio_context_acquire(ctx);
ret = bdrv_append(bs_snapshot, bs, errp);
aio_context_release(ctx);
if (ret < 0) {
bs_snapshot = NULL;
goto out;
@@ -3893,7 +3843,6 @@ bdrv_open_inherit(const char *filename, const char *reference, QDict *options,
Error *local_err = NULL;
QDict *snapshot_options = NULL;
int snapshot_flags = 0;
AioContext *ctx = qemu_get_aio_context();
assert(!child_class || !flags);
assert(!child_class == !parent);
@@ -4031,13 +3980,9 @@ bdrv_open_inherit(const char *filename, const char *reference, QDict *options,
/* Not requesting BLK_PERM_CONSISTENT_READ because we're only
* looking at the header to guess the image format. This works even
* in cases where a guest would not see a consistent state. */
ctx = bdrv_get_aio_context(file_bs);
aio_context_acquire(ctx);
file = blk_new(ctx, 0, BLK_PERM_ALL);
file = blk_new(bdrv_get_aio_context(file_bs), 0, BLK_PERM_ALL);
blk_insert_bs(file, file_bs, &local_err);
bdrv_unref(file_bs);
aio_context_release(ctx);
if (local_err) {
goto fail;
}
@@ -4083,13 +4028,8 @@ bdrv_open_inherit(const char *filename, const char *reference, QDict *options,
goto fail;
}
/* The AioContext could have changed during bdrv_open_common() */
ctx = bdrv_get_aio_context(bs);
if (file) {
aio_context_acquire(ctx);
blk_unref(file);
aio_context_release(ctx);
file = NULL;
}
@@ -4147,16 +4087,13 @@ bdrv_open_inherit(const char *filename, const char *reference, QDict *options,
* (snapshot_bs); thus, we have to drop the strong reference to bs
* (which we obtained by calling bdrv_new()). bs will not be deleted,
* though, because the overlay still has a reference to it. */
aio_context_acquire(ctx);
bdrv_unref(bs);
aio_context_release(ctx);
bs = snapshot_bs;
}
return bs;
fail:
aio_context_acquire(ctx);
blk_unref(file);
qobject_unref(snapshot_options);
qobject_unref(bs->explicit_options);
@@ -4165,14 +4102,11 @@ fail:
bs->options = NULL;
bs->explicit_options = NULL;
bdrv_unref(bs);
aio_context_release(ctx);
error_propagate(errp, local_err);
return NULL;
close_and_fail:
aio_context_acquire(ctx);
bdrv_unref(bs);
aio_context_release(ctx);
qobject_unref(snapshot_options);
qobject_unref(options);
error_propagate(errp, local_err);
@@ -4644,11 +4578,6 @@ int bdrv_reopen_set_read_only(BlockDriverState *bs, bool read_only,
* backing BlockDriverState (or NULL).
*
* Return 0 on success, otherwise return < 0 and set @errp.
*
* The caller must hold the AioContext lock of @reopen_state->bs.
* @reopen_state->bs can move to a different AioContext in this function.
* Callers must make sure that their AioContext locking is still correct after
* this.
*/
static int bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
bool is_backing, Transaction *tran,
@@ -4661,8 +4590,6 @@ static int bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
const char *child_name = is_backing ? "backing" : "file";
QObject *value;
const char *str;
AioContext *ctx, *old_ctx;
int ret;
GLOBAL_STATE_CODE();
@@ -4727,22 +4654,8 @@ static int bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
reopen_state->old_file_bs = old_child_bs;
}
old_ctx = bdrv_get_aio_context(bs);
ctx = bdrv_get_aio_context(new_child_bs);
if (old_ctx != ctx) {
aio_context_release(old_ctx);
aio_context_acquire(ctx);
}
ret = bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
tran, errp);
if (old_ctx != ctx) {
aio_context_release(ctx);
aio_context_acquire(old_ctx);
}
return ret;
return bdrv_set_file_or_backing_noperm(bs, new_child_bs, is_backing,
tran, errp);
}
/*
@@ -4761,7 +4674,6 @@ static int bdrv_reopen_parse_file_or_backing(BDRVReopenState *reopen_state,
* It is the responsibility of the caller to then call the abort() or
* commit() for any other BDS that have been left in a prepare() state
*
* The caller must hold the AioContext lock of @reopen_state->bs.
*/
static int bdrv_reopen_prepare(BDRVReopenState *reopen_state,
BlockReopenQueue *queue,


@@ -22,6 +22,16 @@
#include "block/block-io.h"
/*
* Keep the QEMU BlockDriver names identical to the libblkio driver names.
* Using macros instead of typing out the string literals avoids typos.
*/
#define DRIVER_IO_URING "io_uring"
#define DRIVER_NVME_IO_URING "nvme-io_uring"
#define DRIVER_VIRTIO_BLK_VFIO_PCI "virtio-blk-vfio-pci"
#define DRIVER_VIRTIO_BLK_VHOST_USER "virtio-blk-vhost-user"
#define DRIVER_VIRTIO_BLK_VHOST_VDPA "virtio-blk-vhost-vdpa"
/*
* Allocated bounce buffers are kept in a list sorted by buffer address.
*/
@@ -603,8 +613,8 @@ static void blkio_unregister_buf(BlockDriverState *bs, void *host, size_t size)
}
}
static int blkio_io_uring_connect(BlockDriverState *bs, QDict *options,
int flags, Error **errp)
static int blkio_io_uring_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
const char *filename = qdict_get_str(options, "filename");
BDRVBlkioState *s = bs->opaque;
@@ -627,18 +637,11 @@ static int blkio_io_uring_connect(BlockDriverState *bs, QDict *options,
}
}
ret = blkio_connect(s->blkio);
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
return ret;
}
return 0;
}
static int blkio_nvme_io_uring_connect(BlockDriverState *bs, QDict *options,
int flags, Error **errp)
static int blkio_nvme_io_uring(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
const char *path = qdict_get_try_str(options, "path");
BDRVBlkioState *s = bs->opaque;
@@ -662,23 +665,16 @@ static int blkio_nvme_io_uring_connect(BlockDriverState *bs, QDict *options,
return -EINVAL;
}
ret = blkio_connect(s->blkio);
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
return ret;
}
return 0;
}
static int blkio_virtio_blk_connect(BlockDriverState *bs, QDict *options,
int flags, Error **errp)
static int blkio_virtio_blk_common_open(BlockDriverState *bs,
QDict *options, int flags, Error **errp)
{
const char *path = qdict_get_try_str(options, "path");
BDRVBlkioState *s = bs->opaque;
bool fd_supported = false;
int fd = -1, ret;
int fd, ret;
if (!path) {
error_setg(errp, "missing 'path' option");
@@ -690,7 +686,7 @@ static int blkio_virtio_blk_connect(BlockDriverState *bs, QDict *options,
return -EINVAL;
}
if (blkio_set_int(s->blkio, "fd", -1) == 0) {
if (blkio_get_int(s->blkio, "fd", &fd) == 0) {
fd_supported = true;
}
@@ -700,86 +696,33 @@ static int blkio_virtio_blk_connect(BlockDriverState *bs, QDict *options,
* layer through the "/dev/fdset/N" special path.
*/
if (fd_supported) {
/*
* `path` can contain the path of a character device
* (e.g. /dev/vhost-vdpa-0 or /dev/vfio/vfio) or a unix socket.
*
* So, we should always open it with O_RDWR flag, also if BDRV_O_RDWR
* is not set in the open flags, because the exchange of IOCTL commands
* for example will fail.
*
* In order to open the device read-only, we are using the `read-only`
* property of the libblkio driver in blkio_file_open().
*/
fd = qemu_open(path, O_RDWR, NULL);
if (fd < 0) {
/*
* qemu_open() can fail if the user specifies a path that is not
* a file or device, for example in the case of Unix Domain Socket
* for the virtio-blk-vhost-user driver. In such cases let's have
* libblkio open the path directly.
*/
fd_supported = false;
int open_flags;
if (flags & BDRV_O_RDWR) {
open_flags = O_RDWR;
} else {
ret = blkio_set_int(s->blkio, "fd", fd);
if (ret < 0) {
fd_supported = false;
qemu_close(fd);
fd = -1;
}
open_flags = O_RDONLY;
}
}
if (!fd_supported) {
ret = blkio_set_str(s->blkio, "path", path);
if (ret < 0) {
error_setg_errno(errp, -ret, "failed to set path: %s",
blkio_get_error_msg());
return ret;
fd = qemu_open(path, open_flags, errp);
if (fd < 0) {
return -EINVAL;
}
}
ret = blkio_connect(s->blkio);
if (ret < 0 && fd >= 0) {
/* Failed to give the FD to libblkio, close it */
qemu_close(fd);
fd = -1;
}
/*
* Before https://gitlab.com/libblkio/libblkio/-/merge_requests/208
* (libblkio <= v1.3.0), setting the `fd` property is not enough to check
* whether the driver supports the `fd` property or not. In that case,
* blkio_connect() will fail with -EINVAL.
* So let's try calling blkio_connect() again by directly setting `path`
* to cover this scenario.
*/
if (fd_supported && ret == -EINVAL) {
/*
* We need to clear the `fd` property we set previously by setting
* it to -1.
*/
ret = blkio_set_int(s->blkio, "fd", -1);
ret = blkio_set_int(s->blkio, "fd", fd);
if (ret < 0) {
error_setg_errno(errp, -ret, "failed to set fd: %s",
blkio_get_error_msg());
qemu_close(fd);
return ret;
}
} else {
ret = blkio_set_str(s->blkio, "path", path);
if (ret < 0) {
error_setg_errno(errp, -ret, "failed to set path: %s",
blkio_get_error_msg());
return ret;
}
ret = blkio_connect(s->blkio);
}
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
return ret;
}
qdict_del(options, "path");
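
The connect logic above prefers handing libblkio an already-open fd and falls back to setting `path` when the driver (or an older libblkio, which reports -EINVAL only at connect time) rejects the fd property. A control-flow sketch of that try-fd-then-retry-with-path shape, with stub functions standing in for the libblkio API (assumptions, not real calls):

#include <stdio.h>

static int set_fd(int fd)          { (void)fd; return 0; }
static int set_path(const char *p) { (void)p;  return 0; }
static int try_connect(void)       { static int calls; return calls++ ? 0 : -22; }

static int connect_with_fallback(int fd, const char *path)
{
    int fd_supported = fd >= 0 && set_fd(fd) == 0;
    int ret;

    if (!fd_supported) {
        set_path(path);
    }
    ret = try_connect();
    if (fd_supported && ret == -22) {   /* -EINVAL: fd property unsupported */
        set_fd(-1);                     /* clear the stale fd property */
        set_path(path);                 /* retry by path instead */
        ret = try_connect();
    }
    return ret;
}

int main(void)
{
    /* First connect fails with -EINVAL, retry by path succeeds. */
    printf("ret=%d\n", connect_with_fallback(3, "/dev/vhost-vdpa-0"));
    return 0;
}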
@@ -801,6 +744,24 @@ static int blkio_file_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
if (strcmp(blkio_driver, DRIVER_IO_URING) == 0) {
ret = blkio_io_uring_open(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_NVME_IO_URING) == 0) {
ret = blkio_nvme_io_uring(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_VIRTIO_BLK_VFIO_PCI) == 0) {
ret = blkio_virtio_blk_common_open(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_VIRTIO_BLK_VHOST_USER) == 0) {
ret = blkio_virtio_blk_common_open(bs, options, flags, errp);
} else if (strcmp(blkio_driver, DRIVER_VIRTIO_BLK_VHOST_VDPA) == 0) {
ret = blkio_virtio_blk_common_open(bs, options, flags, errp);
} else {
g_assert_not_reached();
}
if (ret < 0) {
blkio_destroy(&s->blkio);
return ret;
}
if (!(flags & BDRV_O_RDWR)) {
ret = blkio_set_bool(s->blkio, "read-only", true);
if (ret < 0) {
@@ -811,20 +772,10 @@ static int blkio_file_open(BlockDriverState *bs, QDict *options, int flags,
}
}
if (strcmp(blkio_driver, "io_uring") == 0) {
ret = blkio_io_uring_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "nvme-io_uring") == 0) {
ret = blkio_nvme_io_uring_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "virtio-blk-vfio-pci") == 0) {
ret = blkio_virtio_blk_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "virtio-blk-vhost-user") == 0) {
ret = blkio_virtio_blk_connect(bs, options, flags, errp);
} else if (strcmp(blkio_driver, "virtio-blk-vhost-vdpa") == 0) {
ret = blkio_virtio_blk_connect(bs, options, flags, errp);
} else {
g_assert_not_reached();
}
ret = blkio_connect(s->blkio);
if (ret < 0) {
error_setg_errno(errp, -ret, "blkio_connect failed: %s",
blkio_get_error_msg());
blkio_destroy(&s->blkio);
return ret;
}
@@ -904,7 +855,6 @@ static int blkio_file_open(BlockDriverState *bs, QDict *options, int flags,
QLIST_INIT(&s->bounce_bufs);
s->blkioq = blkio_get_queue(s->blkio, 0);
s->completion_fd = blkioq_get_completion_fd(s->blkioq);
blkioq_set_completion_fd_enabled(s->blkioq, true);
blkio_attach_aio_context(bs, bdrv_get_aio_context(bs));
return 0;
@@ -1078,63 +1028,49 @@ static void blkio_refresh_limits(BlockDriverState *bs, Error **errp)
* - truncate
*/
/*
* Do not include .format_name and .protocol_name because module_block.py
* does not parse macros in the source code.
*/
#define BLKIO_DRIVER_COMMON \
.instance_size = sizeof(BDRVBlkioState), \
.bdrv_file_open = blkio_file_open, \
.bdrv_close = blkio_close, \
.bdrv_co_getlength = blkio_co_getlength, \
.bdrv_co_truncate = blkio_truncate, \
.bdrv_co_get_info = blkio_co_get_info, \
.bdrv_attach_aio_context = blkio_attach_aio_context, \
.bdrv_detach_aio_context = blkio_detach_aio_context, \
.bdrv_co_pdiscard = blkio_co_pdiscard, \
.bdrv_co_preadv = blkio_co_preadv, \
.bdrv_co_pwritev = blkio_co_pwritev, \
.bdrv_co_flush_to_disk = blkio_co_flush, \
.bdrv_co_pwrite_zeroes = blkio_co_pwrite_zeroes, \
.bdrv_refresh_limits = blkio_refresh_limits, \
.bdrv_register_buf = blkio_register_buf, \
.bdrv_unregister_buf = blkio_unregister_buf,
#define BLKIO_DRIVER(name, ...) \
{ \
.format_name = name, \
.protocol_name = name, \
.instance_size = sizeof(BDRVBlkioState), \
.bdrv_file_open = blkio_file_open, \
.bdrv_close = blkio_close, \
.bdrv_co_getlength = blkio_co_getlength, \
.bdrv_co_truncate = blkio_truncate, \
.bdrv_co_get_info = blkio_co_get_info, \
.bdrv_attach_aio_context = blkio_attach_aio_context, \
.bdrv_detach_aio_context = blkio_detach_aio_context, \
.bdrv_co_pdiscard = blkio_co_pdiscard, \
.bdrv_co_preadv = blkio_co_preadv, \
.bdrv_co_pwritev = blkio_co_pwritev, \
.bdrv_co_flush_to_disk = blkio_co_flush, \
.bdrv_co_pwrite_zeroes = blkio_co_pwrite_zeroes, \
.bdrv_refresh_limits = blkio_refresh_limits, \
.bdrv_register_buf = blkio_register_buf, \
.bdrv_unregister_buf = blkio_unregister_buf, \
__VA_ARGS__ \
}
/*
* Use the same .format_name and .protocol_name as the libblkio driver name for
* consistency.
*/
static BlockDriver bdrv_io_uring = {
.format_name = "io_uring",
.protocol_name = "io_uring",
static BlockDriver bdrv_io_uring = BLKIO_DRIVER(
DRIVER_IO_URING,
.bdrv_needs_filename = true,
BLKIO_DRIVER_COMMON
};
);
static BlockDriver bdrv_nvme_io_uring = {
.format_name = "nvme-io_uring",
.protocol_name = "nvme-io_uring",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_nvme_io_uring = BLKIO_DRIVER(
DRIVER_NVME_IO_URING,
);
static BlockDriver bdrv_virtio_blk_vfio_pci = {
.format_name = "virtio-blk-vfio-pci",
.protocol_name = "virtio-blk-vfio-pci",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_virtio_blk_vfio_pci = BLKIO_DRIVER(
DRIVER_VIRTIO_BLK_VFIO_PCI
);
static BlockDriver bdrv_virtio_blk_vhost_user = {
.format_name = "virtio-blk-vhost-user",
.protocol_name = "virtio-blk-vhost-user",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_virtio_blk_vhost_user = BLKIO_DRIVER(
DRIVER_VIRTIO_BLK_VHOST_USER
);
static BlockDriver bdrv_virtio_blk_vhost_vdpa = {
.format_name = "virtio-blk-vhost-vdpa",
.protocol_name = "virtio-blk-vhost-vdpa",
BLKIO_DRIVER_COMMON
};
static BlockDriver bdrv_virtio_blk_vhost_vdpa = BLKIO_DRIVER(
DRIVER_VIRTIO_BLK_VHOST_VDPA
);
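
The BLKIO_DRIVER()/BLKIO_DRIVER_COMMON variants above differ in whether .format_name/.protocol_name live inside the macro; keeping them as literals matters because module_block.py greps the source and does not expand macros. A standalone sketch of the common-fields-macro style with the name fields kept literal (hypothetical Driver type, not the QEMU BlockDriver API):

#include <stdio.h>

typedef struct Driver {
    const char *format_name;
    const char *protocol_name;
    int (*open)(const char *path);
    int queue_depth;
} Driver;

static int generic_open(const char *path)
{
    (void)path;
    return 0;
}

/* Shared callbacks live in the macro; name fields stay literal so a
 * grep-based script can still find them without expanding macros. */
#define DRIVER_COMMON \
    .open = generic_open, \
    .queue_depth = 64,

static const Driver drv_io_uring = {
    .format_name = "io_uring",
    .protocol_name = "io_uring",
    DRIVER_COMMON
};

int main(void)
{
    printf("%s depth=%d\n", drv_io_uring.format_name,
           drv_io_uring.queue_depth);
    return 0;
}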
static void bdrv_blkio_init(void)
{


@@ -203,8 +203,7 @@ static void bochs_refresh_limits(BlockDriverState *bs, Error **errp)
bs->bl.request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O */
}
static int64_t coroutine_fn GRAPH_RDLOCK
seek_to_sector(BlockDriverState *bs, int64_t sector_num)
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVBochsState *s = bs->opaque;
uint64_t offset = sector_num * 512;
@@ -225,8 +224,8 @@ seek_to_sector(BlockDriverState *bs, int64_t sector_num)
(s->extent_blocks + s->bitmap_blocks));
/* read in bitmap for current extent */
ret = bdrv_co_pread(bs->file, bitmap_offset + (extent_offset / 8), 1,
&bitmap_entry, 0);
ret = bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8), 1,
&bitmap_entry, 0);
if (ret < 0) {
return ret;
}


@@ -212,8 +212,7 @@ static void cloop_refresh_limits(BlockDriverState *bs, Error **errp)
bs->bl.request_alignment = BDRV_SECTOR_SIZE; /* No sub-sector I/O */
}
static int coroutine_fn GRAPH_RDLOCK
cloop_read_block(BlockDriverState *bs, int block_num)
static inline int cloop_read_block(BlockDriverState *bs, int block_num)
{
BDRVCloopState *s = bs->opaque;
@@ -221,8 +220,8 @@ cloop_read_block(BlockDriverState *bs, int block_num)
int ret;
uint32_t bytes = s->offsets[block_num + 1] - s->offsets[block_num];
ret = bdrv_co_pread(bs->file, s->offsets[block_num], bytes,
s->compressed_block, 0);
ret = bdrv_pread(bs->file, s->offsets[block_num], bytes,
s->compressed_block, 0);
if (ret < 0) {
return -1;
}
@@ -245,7 +244,7 @@ cloop_read_block(BlockDriverState *bs, int block_num)
return 0;
}
static int coroutine_fn GRAPH_RDLOCK
static int coroutine_fn
cloop_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
QEMUIOVector *qiov, BdrvRequestFlags flags)
{


@@ -616,8 +616,7 @@ err:
return s->n_chunks; /* error */
}
static int coroutine_fn GRAPH_RDLOCK
dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
{
BDRVDMGState *s = bs->opaque;
@@ -634,8 +633,8 @@ dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
case UDZO: { /* zlib compressed */
/* we need to buffer, because only the chunk as whole can be
* inflated. */
ret = bdrv_co_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->compressed_chunk, 0);
ret = bdrv_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->compressed_chunk, 0);
if (ret < 0) {
return -1;
}
@@ -660,8 +659,8 @@ dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
}
/* we need to buffer, because only the chunk as whole can be
* inflated. */
ret = bdrv_co_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->compressed_chunk, 0);
ret = bdrv_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->compressed_chunk, 0);
if (ret < 0) {
return -1;
}
@@ -681,8 +680,8 @@ dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
}
/* we need to buffer, because only the chunk as whole can be
* inflated. */
ret = bdrv_co_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->compressed_chunk, 0);
ret = bdrv_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->compressed_chunk, 0);
if (ret < 0) {
return -1;
}
@@ -697,8 +696,8 @@ dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
}
break;
case UDRW: /* copy */
ret = bdrv_co_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->uncompressed_chunk, 0);
ret = bdrv_pread(bs->file, s->offsets[chunk], s->lengths[chunk],
s->uncompressed_chunk, 0);
if (ret < 0) {
return -1;
}
@@ -714,7 +713,7 @@ dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
return 0;
}
static int coroutine_fn GRAPH_RDLOCK
static int coroutine_fn
dmg_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
QEMUIOVector *qiov, BdrvRequestFlags flags)
{

View File

@@ -193,7 +193,7 @@ static int fd_open(BlockDriverState *bs)
return -EIO;
}
static int64_t raw_getlength(BlockDriverState *bs);
static int64_t coroutine_fn raw_co_getlength(BlockDriverState *bs);
typedef struct RawPosixAIOData {
BlockDriverState *bs;
@@ -1232,6 +1232,7 @@ static int hdev_get_max_hw_transfer(int fd, struct stat *st)
static int get_sysfs_str_val(struct stat *st, const char *attribute,
char **val) {
g_autofree char *sysfspath = NULL;
int ret;
size_t len;
if (!S_ISBLK(st->st_mode)) {
@@ -1241,7 +1242,8 @@ static int get_sysfs_str_val(struct stat *st, const char *attribute,
sysfspath = g_strdup_printf("/sys/dev/block/%u:%u/queue/%s",
major(st->st_rdev), minor(st->st_rdev),
attribute);
if (!g_file_get_contents(sysfspath, val, &len, NULL)) {
ret = g_file_get_contents(sysfspath, val, &len, NULL);
if (ret == -1) {
return -ENOENT;
}
@@ -1251,7 +1253,7 @@ static int get_sysfs_str_val(struct stat *st, const char *attribute,
if (*(p + len - 1) == '\n') {
*(p + len - 1) = '\0';
}
return 0;
return ret;
}
#endif
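
Worth noting for the get_sysfs_str_val() hunk: g_file_get_contents() returns a gboolean (0 or 1), so the `ret == -1` branch on one side of this diff can never fire; testing for FALSE, as the other side does, is the reliable form. A small stand-alone GLib example of the same read-and-strip-newline pattern (the file path is illustrative):

    /* Sketch: reading a sysfs-style attribute with GLib and stripping the
     * trailing newline, as the driver does. */
    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        gchar *val = NULL;
        gsize len = 0;

        if (!g_file_get_contents("/proc/version", &val, &len, NULL)) {
            return 1;               /* the driver maps this to -ENOENT */
        }
        if (len && val[len - 1] == '\n') {
            val[len - 1] = '\0';
        }
        printf("%s\n", val);
        g_free(val);
        return 0;
    }
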
@@ -1412,9 +1414,11 @@ static void raw_refresh_zoned_limits(BlockDriverState *bs, struct stat *st,
BlockZoneModel zoned;
int ret;
bs->bl.zoned = BLK_Z_NONE;
ret = get_sysfs_zoned_model(st, &zoned);
if (ret < 0 || zoned == BLK_Z_NONE) {
goto no_zoned;
return;
}
bs->bl.zoned = zoned;
@@ -1435,10 +1439,10 @@ static void raw_refresh_zoned_limits(BlockDriverState *bs, struct stat *st,
if (ret < 0) {
error_setg_errno(errp, -ret, "Unable to read chunk_sectors "
"sysfs attribute");
goto no_zoned;
return;
} else if (!ret) {
error_setg(errp, "Read 0 from chunk_sectors sysfs attribute");
goto no_zoned;
return;
}
bs->bl.zone_size = ret << BDRV_SECTOR_BITS;
@@ -1446,10 +1450,10 @@ static void raw_refresh_zoned_limits(BlockDriverState *bs, struct stat *st,
if (ret < 0) {
error_setg_errno(errp, -ret, "Unable to read nr_zones "
"sysfs attribute");
goto no_zoned;
return;
} else if (!ret) {
error_setg(errp, "Read 0 from nr_zones sysfs attribute");
goto no_zoned;
return;
}
bs->bl.nr_zones = ret;
@@ -1470,15 +1474,10 @@ static void raw_refresh_zoned_limits(BlockDriverState *bs, struct stat *st,
ret = get_zones_wp(bs, s->fd, 0, bs->bl.nr_zones, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "report wps failed");
goto no_zoned;
bs->wps = NULL;
return;
}
qemu_co_mutex_init(&bs->wps->colock);
return;
no_zoned:
bs->bl.zoned = BLK_Z_NONE;
g_free(bs->wps);
bs->wps = NULL;
}
#else /* !defined(CONFIG_BLKZONED) */
static void raw_refresh_zoned_limits(BlockDriverState *bs, struct stat *st,
@@ -1975,7 +1974,7 @@ static int handle_aiocb_write_zeroes(void *opaque)
#ifdef CONFIG_FALLOCATE
/* Last resort: we are trying to extend the file with zeroed data. This
* can be done via fallocate(fd, 0) */
len = raw_getlength(aiocb->bs);
len = raw_co_getlength(aiocb->bs);
if (s->has_fallocate && len >= 0 && aiocb->aio_offset >= len) {
int ret = do_fallocate(s->fd, 0, aiocb->aio_offset, aiocb->aio_nbytes);
if (ret == 0 || ret != -ENOTSUP) {
@@ -2455,10 +2454,9 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
if (fd_open(bs) < 0)
return -EIO;
#if defined(CONFIG_BLKZONED)
if ((type & (QEMU_AIO_WRITE | QEMU_AIO_ZONE_APPEND)) &&
bs->bl.zoned != BLK_Z_NONE) {
if ((type & (QEMU_AIO_WRITE | QEMU_AIO_ZONE_APPEND)) && bs->wps) {
qemu_co_mutex_lock(&bs->wps->colock);
if (type & QEMU_AIO_ZONE_APPEND) {
if (type & QEMU_AIO_ZONE_APPEND && bs->bl.zone_size) {
int index = offset / bs->bl.zone_size;
offset = bs->wps->wp[index];
}
@@ -2506,10 +2504,11 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
out:
#if defined(CONFIG_BLKZONED)
if ((type & (QEMU_AIO_WRITE | QEMU_AIO_ZONE_APPEND)) &&
bs->bl.zoned != BLK_Z_NONE) {
BlockZoneWps *wps = bs->wps;
if (ret == 0) {
{
BlockZoneWps *wps = bs->wps;
if (ret == 0) {
if ((type & (QEMU_AIO_WRITE | QEMU_AIO_ZONE_APPEND))
&& wps && bs->bl.zone_size) {
uint64_t *wp = &wps->wp[offset / bs->bl.zone_size];
if (!BDRV_ZT_IS_CONV(*wp)) {
if (type & QEMU_AIO_ZONE_APPEND) {
@@ -2522,12 +2521,17 @@ out:
*wp = offset + bytes;
}
}
} else {
}
} else {
if (type & (QEMU_AIO_WRITE | QEMU_AIO_ZONE_APPEND)) {
update_zones_wp(bs, s->fd, 0, 1);
}
}
if ((type & (QEMU_AIO_WRITE | QEMU_AIO_ZONE_APPEND)) && wps) {
qemu_co_mutex_unlock(&wps->colock);
}
}
#endif
return ret;
}
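
The zone-append path above resolves a request aimed at a zone's start to that zone's current write pointer and, on success, advances the pointer past the bytes written. A simplified model of the bookkeeping (types and the zone size are illustrative; QEMU keeps the pointers in bs->wps->wp[] under a coroutine mutex):

    /* Sketch: emulating zone append against tracked write pointers. */
    #include <stdint.h>
    #include <stdio.h>

    #define ZONE_SIZE (4u << 20)    /* illustrative: 4 MiB zones */

    typedef struct {
        uint64_t wp[8];             /* one write pointer per zone */
    } Zones;

    /* Translate an append aimed at a zone into a concrete offset, then
     * advance the pointer as if the write succeeded. */
    static uint64_t zone_append(Zones *z, uint64_t zone_start, uint64_t bytes)
    {
        unsigned idx = zone_start / ZONE_SIZE;
        uint64_t off = z->wp[idx];
        z->wp[idx] = off + bytes;
        return off;                 /* where the data actually landed */
    }

    int main(void)
    {
        Zones z = { .wp = { 0, ZONE_SIZE, 2 * ZONE_SIZE } };
        printf("first append lands at %llu\n",
               (unsigned long long)zone_append(&z, 0, 4096));
        printf("second append lands at %llu\n",
               (unsigned long long)zone_append(&z, 0, 4096));
        return 0;
    }
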
@@ -2662,7 +2666,7 @@ static int coroutine_fn raw_co_truncate(BlockDriverState *bs, int64_t offset,
}
if (S_ISCHR(st.st_mode) || S_ISBLK(st.st_mode)) {
int64_t cur_length = raw_getlength(bs);
int64_t cur_length = raw_co_getlength(bs);
if (offset != cur_length && exact) {
error_setg(errp, "Cannot resize device files");
@@ -2680,7 +2684,7 @@ static int coroutine_fn raw_co_truncate(BlockDriverState *bs, int64_t offset,
}
#ifdef __OpenBSD__
static int64_t raw_getlength(BlockDriverState *bs)
static int64_t coroutine_fn raw_co_getlength(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
int fd = s->fd;
@@ -2699,7 +2703,7 @@ static int64_t raw_getlength(BlockDriverState *bs)
return st.st_size;
}
#elif defined(__NetBSD__)
static int64_t raw_getlength(BlockDriverState *bs)
static int64_t coroutine_fn raw_co_getlength(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
int fd = s->fd;
@@ -2724,7 +2728,7 @@ static int64_t raw_getlength(BlockDriverState *bs)
return st.st_size;
}
#elif defined(__sun__)
static int64_t raw_getlength(BlockDriverState *bs)
static int64_t coroutine_fn raw_co_getlength(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
struct dk_minfo minfo;
@@ -2755,7 +2759,7 @@ static int64_t raw_getlength(BlockDriverState *bs)
return size;
}
#elif defined(CONFIG_BSD)
static int64_t raw_getlength(BlockDriverState *bs)
static int64_t coroutine_fn raw_co_getlength(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
int fd = s->fd;
@@ -2827,7 +2831,7 @@ again:
return size;
}
#else
static int64_t raw_getlength(BlockDriverState *bs)
static int64_t coroutine_fn raw_co_getlength(BlockDriverState *bs)
{
BDRVRawState *s = bs->opaque;
int ret;
@@ -2846,11 +2850,6 @@ static int64_t raw_getlength(BlockDriverState *bs)
}
#endif
static int64_t coroutine_fn raw_co_getlength(BlockDriverState *bs)
{
return raw_getlength(bs);
}
static int64_t coroutine_fn raw_co_get_allocated_file_size(BlockDriverState *bs)
{
struct stat st;
@@ -3216,7 +3215,7 @@ static int coroutine_fn raw_co_block_status(BlockDriverState *bs,
* round up if necessary.
*/
if (!QEMU_IS_ALIGNED(*pnum, bs->bl.request_alignment)) {
int64_t file_length = raw_getlength(bs);
int64_t file_length = raw_co_getlength(bs);
if (file_length > 0) {
/* Ignore errors, this is just a safeguard */
assert(hole == file_length);
@@ -3238,7 +3237,7 @@ static int coroutine_fn raw_co_block_status(BlockDriverState *bs,
#if defined(__linux__)
/* Verify that the file is not in the page cache */
static void check_cache_dropped(BlockDriverState *bs, Error **errp)
static void coroutine_fn check_cache_dropped(BlockDriverState *bs, Error **errp)
{
const size_t window_size = 128 * 1024 * 1024;
BDRVRawState *s = bs->opaque;
@@ -3253,7 +3252,7 @@ static void check_cache_dropped(BlockDriverState *bs, Error **errp)
page_size = sysconf(_SC_PAGESIZE);
vec = g_malloc(DIV_ROUND_UP(window_size, page_size));
end = raw_getlength(bs);
end = raw_co_getlength(bs);
for (offset = 0; offset < end; offset += window_size) {
void *new_window;
@@ -4469,7 +4468,7 @@ static int cdrom_reopen(BlockDriverState *bs)
static bool coroutine_fn cdrom_co_is_inserted(BlockDriverState *bs)
{
return raw_getlength(bs) > 0;
return raw_co_getlength(bs) > 0;
}
static void coroutine_fn cdrom_co_eject(BlockDriverState *bs, bool eject_flag)

View File

@@ -30,8 +30,10 @@ BdrvGraphLock graph_lock;
/* Protects the list of aiocontext and orphaned_reader_count */
static QemuMutex aio_context_list_lock;
#if 0
/* Written and read with atomic operations. */
static int has_writer;
#endif
/*
* A reader coroutine could move from an AioContext to another.
@@ -88,6 +90,7 @@ void unregister_aiocontext(AioContext *ctx)
g_free(ctx->bdrv_graph);
}
#if 0
static uint32_t reader_count(void)
{
BdrvGraphRWlock *brdv_graph;
@@ -105,26 +108,18 @@ static uint32_t reader_count(void)
assert((int32_t)rd >= 0);
return rd;
}
#endif
void bdrv_graph_wrlock(BlockDriverState *bs)
void bdrv_graph_wrlock(void)
{
AioContext *ctx = NULL;
GLOBAL_STATE_CODE();
assert(!qatomic_read(&has_writer));
/*
* Release only non-mainloop AioContext. The mainloop often relies on the
* BQL and doesn't lock the main AioContext before doing things.
* TODO Some callers hold an AioContext lock when this is called, which
* causes deadlocks. Reenable once the AioContext locking is cleaned up (or
* AioContext locks are gone).
*/
if (bs) {
ctx = bdrv_get_aio_context(bs);
if (ctx != qemu_get_aio_context()) {
aio_context_release(ctx);
} else {
ctx = NULL;
}
}
#if 0
assert(!qatomic_read(&has_writer));
/* Make sure that constantly arriving new I/O doesn't cause starvation */
bdrv_drain_all_begin_nopoll();
@@ -154,15 +149,13 @@ void bdrv_graph_wrlock(BlockDriverState *bs)
} while (reader_count() >= 1);
bdrv_drain_all_end();
if (ctx) {
aio_context_acquire(bdrv_get_aio_context(bs));
}
#endif
}
void bdrv_graph_wrunlock(void)
{
GLOBAL_STATE_CODE();
#if 0
QEMU_LOCK_GUARD(&aio_context_list_lock);
assert(qatomic_read(&has_writer));
@@ -174,10 +167,13 @@ void bdrv_graph_wrunlock(void)
/* Wake up all coroutine that are waiting to read the graph */
qemu_co_enter_all(&reader_queue, &aio_context_list_lock);
#endif
}
void coroutine_fn bdrv_graph_co_rdlock(void)
{
/* TODO Reenable when wrlock is reenabled */
#if 0
BdrvGraphRWlock *bdrv_graph;
bdrv_graph = qemu_get_current_aio_context()->bdrv_graph;
@@ -237,10 +233,12 @@ void coroutine_fn bdrv_graph_co_rdlock(void)
qemu_co_queue_wait(&reader_queue, &aio_context_list_lock);
}
}
#endif
}
void coroutine_fn bdrv_graph_co_rdunlock(void)
{
#if 0
BdrvGraphRWlock *bdrv_graph;
bdrv_graph = qemu_get_current_aio_context()->bdrv_graph;
@@ -258,6 +256,7 @@ void coroutine_fn bdrv_graph_co_rdunlock(void)
if (qatomic_read(&has_writer)) {
aio_wait_kick();
}
#endif
}
void bdrv_graph_rdlock_main_loop(void)
@@ -275,13 +274,19 @@ void bdrv_graph_rdunlock_main_loop(void)
void assert_bdrv_graph_readable(void)
{
/* reader_count() is slow due to aio_context_list_lock lock contention */
/* TODO Reenable when wrlock is reenabled */
#if 0
#ifdef CONFIG_DEBUG_GRAPH_LOCK
assert(qemu_in_main_thread() || reader_count());
#endif
#endif
}
void assert_bdrv_graph_writable(void)
{
assert(qemu_in_main_thread());
/* TODO Reenable when wrlock is reenabled */
#if 0
assert(qatomic_read(&has_writer));
#endif
}
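
With the #if 0 blocks in place, the graph lock above degenerates to no-ops; the disabled code implements one global has_writer flag plus reader counts that the writer polls down to zero. A single-counter sketch of that protocol (the real code keeps one count per AioContext, queues coroutines, and drains I/O instead of spinning):

    /* Sketch of the disabled protocol: writer flag plus reader count. */
    #include <assert.h>
    #include <stdatomic.h>

    static atomic_int has_writer;
    static atomic_int readers;

    static void rdlock(void)
    {
        for (;;) {
            atomic_fetch_add(&readers, 1);
            if (!atomic_load(&has_writer)) {
                return;             /* fast path: no writer active */
            }
            atomic_fetch_sub(&readers, 1);
            /* real code: queue this coroutine and yield until woken */
        }
    }

    static void rdunlock(void)
    {
        atomic_fetch_sub(&readers, 1);
    }

    static void wrlock(void)
    {
        assert(!atomic_load(&has_writer));
        atomic_store(&has_writer, 1);
        while (atomic_load(&readers) > 0) {
            /* real code: drain all I/O and poll until reader_count() == 0 */
        }
    }

    static void wrunlock(void)
    {
        atomic_store(&has_writer, 0);
        /* real code: wake every coroutine queued in rdlock() */
    }

    int main(void)
    {
        rdlock();
        rdunlock();
        wrlock();
        wrunlock();
        return 0;
    }
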

View File

@@ -1379,7 +1379,7 @@ bdrv_aligned_preadv(BdrvChild *child, BdrvTrackedRequest *req,
}
/* Forward the request to the BlockDriver, possibly fragmenting it */
total_bytes = bdrv_co_getlength(bs);
total_bytes = bdrv_getlength(bs);
if (total_bytes < 0) {
ret = total_bytes;
goto out;
@@ -1710,11 +1710,7 @@ static int bdrv_pad_request(BlockDriverState *bs,
int sliced_niov;
size_t sliced_head, sliced_tail;
/* Should have been checked by the caller already */
ret = bdrv_check_request32(*offset, *bytes, *qiov, *qiov_offset);
if (ret < 0) {
return ret;
}
bdrv_check_qiov_request(*offset, *bytes, *qiov, *qiov_offset, &error_abort);
if (!bdrv_init_padding(bs, *offset, *bytes, write, pad)) {
if (padded) {
@@ -1727,7 +1723,7 @@ static int bdrv_pad_request(BlockDriverState *bs,
&sliced_head, &sliced_tail,
&sliced_niov);
/* Guaranteed by bdrv_check_request32() */
/* Guaranteed by bdrv_check_qiov_request() */
assert(*bytes <= SIZE_MAX);
ret = bdrv_create_padded_qiov(bs, pad, sliced_iov, sliced_niov,
sliced_head, *bytes);
@@ -2392,7 +2388,7 @@ bdrv_co_block_status(BlockDriverState *bs, bool want_zero,
assert(pnum);
assert_bdrv_graph_readable();
*pnum = 0;
total_size = bdrv_co_getlength(bs);
total_size = bdrv_getlength(bs);
if (total_size < 0) {
ret = total_size;
goto early_out;
@@ -2412,7 +2408,7 @@ bdrv_co_block_status(BlockDriverState *bs, bool want_zero,
bytes = n;
}
/* Must be non-NULL or bdrv_co_getlength() would have failed */
/* Must be non-NULL or bdrv_getlength() would have failed */
assert(bs->drv);
has_filtered_child = bdrv_filter_child(bs);
if (!bs->drv->bdrv_co_block_status && !has_filtered_child) {
@@ -2550,7 +2546,7 @@ bdrv_co_block_status(BlockDriverState *bs, bool want_zero,
if (!cow_bs) {
ret |= BDRV_BLOCK_ZERO;
} else if (want_zero) {
int64_t size2 = bdrv_co_getlength(cow_bs);
int64_t size2 = bdrv_getlength(cow_bs);
if (size2 >= 0 && offset >= size2) {
ret |= BDRV_BLOCK_ZERO;
@@ -3015,7 +3011,7 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
}
/* Write back cached data to the OS even with cache=unsafe */
BLKDBG_CO_EVENT(primary_child, BLKDBG_FLUSH_TO_OS);
BLKDBG_EVENT(primary_child, BLKDBG_FLUSH_TO_OS);
if (bs->drv->bdrv_co_flush_to_os) {
ret = bs->drv->bdrv_co_flush_to_os(bs);
if (ret < 0) {
@@ -3033,7 +3029,7 @@ int coroutine_fn bdrv_co_flush(BlockDriverState *bs)
goto flush_children;
}
BLKDBG_CO_EVENT(primary_child, BLKDBG_FLUSH_TO_DISK);
BLKDBG_EVENT(primary_child, BLKDBG_FLUSH_TO_DISK);
if (!bs->drv) {
/* bs->drv->bdrv_co_flush() might have ejected the BDS
* (even in case of apparent success) */
@@ -3596,7 +3592,7 @@ int coroutine_fn bdrv_co_truncate(BdrvChild *child, int64_t offset, bool exact,
return ret;
}
old_size = bdrv_co_getlength(bs);
old_size = bdrv_getlength(bs);
if (old_size < 0) {
error_setg_errno(errp, -old_size, "Failed to get old image size");
return old_size;
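
bdrv_pad_request() exists because drivers only accept requests aligned to bs->bl.request_alignment; unaligned head and tail bytes are padded in and later sliced back out. The arithmetic in isolation, with an illustrative 512-byte alignment:

    /* Sketch: head/tail padding for an unaligned request, the arithmetic
     * behind bdrv_init_padding(). */
    #include <stdint.h>
    #include <stdio.h>

    #define ALIGN 512u

    int main(void)
    {
        uint64_t offset = 1000, bytes = 3000;

        uint64_t head = offset % ALIGN;                /* bytes before offset */
        uint64_t tail = (ALIGN - (offset + bytes) % ALIGN) % ALIGN;
        uint64_t aligned_off = offset - head;
        uint64_t aligned_len = head + bytes + tail;

        printf("read %llu..+%llu, copy out the middle %llu bytes at +%llu\n",
               (unsigned long long)aligned_off, (unsigned long long)aligned_len,
               (unsigned long long)bytes, (unsigned long long)head);
        return 0;
    }
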

View File

@@ -1,8 +1,8 @@
/*
* QEMU Block driver for NBD
* QEMU Block driver for NBD
*
* Copyright (c) 2019 Virtuozzo International GmbH.
* Copyright Red Hat
* Copyright (C) 2016 Red Hat, Inc.
* Copyright (C) 2008 Bull S.A.S.
* Author: Laurent Vivier <Laurent.Vivier@bull.net>
*
@@ -50,8 +50,8 @@
#define EN_OPTSTR ":exportname="
#define MAX_NBD_REQUESTS 16
#define COOKIE_TO_INDEX(cookie) ((cookie) - 1)
#define INDEX_TO_COOKIE(index) ((index) + 1)
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ (uint64_t)(intptr_t)(bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ (uint64_t)(intptr_t)(bs))
typedef struct {
Coroutine *coroutine;
@@ -417,25 +417,25 @@ static void coroutine_fn GRAPH_RDLOCK nbd_reconnect_attempt(BDRVNBDState *s)
reconnect_delay_timer_del(s);
}
static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)
static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t handle)
{
int ret;
uint64_t ind = COOKIE_TO_INDEX(cookie), ind2;
uint64_t ind = HANDLE_TO_INDEX(s, handle), ind2;
QEMU_LOCK_GUARD(&s->receive_mutex);
while (true) {
if (s->reply.cookie == cookie) {
if (s->reply.handle == handle) {
/* We are done */
return 0;
}
if (s->reply.cookie != 0) {
if (s->reply.handle != 0) {
/*
* Some other request is being handled now. It should already be
* woken by whoever set s->reply.cookie (or never wait in this
* woken by whoever set s->reply.handle (or never wait in this
* yield). So, we should not wake it here.
*/
ind2 = COOKIE_TO_INDEX(s->reply.cookie);
ind2 = HANDLE_TO_INDEX(s, s->reply.handle);
assert(!s->requests[ind2].receiving);
s->requests[ind].receiving = true;
@@ -445,9 +445,9 @@ static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)
/*
* We may be woken for 2 reasons:
* 1. From this function, executing in parallel coroutine, when our
* cookie is received.
* handle is received.
* 2. From nbd_co_receive_one_chunk(), when previous request is
* finished and s->reply.cookie set to 0.
* finished and s->reply.handle set to 0.
* Anyway, it's OK to lock the mutex and go to the next iteration.
*/
@@ -456,8 +456,8 @@ static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)
continue;
}
/* We are under mutex and cookie is 0. We have to do the dirty work. */
assert(s->reply.cookie == 0);
/* We are under mutex and handle is 0. We have to do the dirty work. */
assert(s->reply.handle == 0);
ret = nbd_receive_reply(s->bs, s->ioc, &s->reply, NULL);
if (ret <= 0) {
ret = ret ? ret : -EIO;
@@ -468,12 +468,12 @@ static coroutine_fn int nbd_receive_replies(BDRVNBDState *s, uint64_t cookie)
nbd_channel_error(s, -EINVAL);
return -EINVAL;
}
ind2 = COOKIE_TO_INDEX(s->reply.cookie);
ind2 = HANDLE_TO_INDEX(s, s->reply.handle);
if (ind2 >= MAX_NBD_REQUESTS || !s->requests[ind2].coroutine) {
nbd_channel_error(s, -EINVAL);
return -EINVAL;
}
if (s->reply.cookie == cookie) {
if (s->reply.handle == handle) {
/* We are done */
return 0;
}
@@ -519,7 +519,7 @@ nbd_co_send_request(BlockDriverState *bs, NBDRequest *request,
qemu_mutex_unlock(&s->requests_lock);
qemu_co_mutex_lock(&s->send_mutex);
request->cookie = INDEX_TO_COOKIE(i);
request->handle = INDEX_TO_HANDLE(s, i);
assert(s->ioc);
@@ -828,11 +828,11 @@ static coroutine_fn int nbd_co_receive_structured_payload(
* corresponding to the server's error reply), and errp is unchanged.
*/
static coroutine_fn int nbd_co_do_receive_one_chunk(
BDRVNBDState *s, uint64_t cookie, bool only_structured,
BDRVNBDState *s, uint64_t handle, bool only_structured,
int *request_ret, QEMUIOVector *qiov, void **payload, Error **errp)
{
int ret;
int i = COOKIE_TO_INDEX(cookie);
int i = HANDLE_TO_INDEX(s, handle);
void *local_payload = NULL;
NBDStructuredReplyChunk *chunk;
@@ -841,14 +841,14 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
}
*request_ret = 0;
ret = nbd_receive_replies(s, cookie);
ret = nbd_receive_replies(s, handle);
if (ret < 0) {
error_setg(errp, "Connection closed");
return -EIO;
}
assert(s->ioc);
assert(s->reply.cookie == cookie);
assert(s->reply.handle == handle);
if (nbd_reply_is_simple(&s->reply)) {
if (only_structured) {
@@ -918,11 +918,11 @@ static coroutine_fn int nbd_co_do_receive_one_chunk(
* Return value is a fatal error code or normal nbd reply error code
*/
static coroutine_fn int nbd_co_receive_one_chunk(
BDRVNBDState *s, uint64_t cookie, bool only_structured,
BDRVNBDState *s, uint64_t handle, bool only_structured,
int *request_ret, QEMUIOVector *qiov, NBDReply *reply, void **payload,
Error **errp)
{
int ret = nbd_co_do_receive_one_chunk(s, cookie, only_structured,
int ret = nbd_co_do_receive_one_chunk(s, handle, only_structured,
request_ret, qiov, payload, errp);
if (ret < 0) {
@@ -932,7 +932,7 @@ static coroutine_fn int nbd_co_receive_one_chunk(
/* For assert at loop start in nbd_connection_entry */
*reply = s->reply;
}
s->reply.cookie = 0;
s->reply.handle = 0;
nbd_recv_coroutines_wake(s);
@@ -975,10 +975,10 @@ static void nbd_iter_request_error(NBDReplyChunkIter *iter, int ret)
* NBD_FOREACH_REPLY_CHUNK
* The pointer stored in @payload requires g_free() to free it.
*/
#define NBD_FOREACH_REPLY_CHUNK(s, iter, cookie, structured, \
#define NBD_FOREACH_REPLY_CHUNK(s, iter, handle, structured, \
qiov, reply, payload) \
for (iter = (NBDReplyChunkIter) { .only_structured = structured }; \
nbd_reply_chunk_iter_receive(s, &iter, cookie, qiov, reply, payload);)
nbd_reply_chunk_iter_receive(s, &iter, handle, qiov, reply, payload);)
/*
* nbd_reply_chunk_iter_receive
@@ -986,7 +986,7 @@ static void nbd_iter_request_error(NBDReplyChunkIter *iter, int ret)
*/
static bool coroutine_fn nbd_reply_chunk_iter_receive(BDRVNBDState *s,
NBDReplyChunkIter *iter,
uint64_t cookie,
uint64_t handle,
QEMUIOVector *qiov,
NBDReply *reply,
void **payload)
@@ -1005,7 +1005,7 @@ static bool coroutine_fn nbd_reply_chunk_iter_receive(BDRVNBDState *s,
reply = &local_reply;
}
ret = nbd_co_receive_one_chunk(s, cookie, iter->only_structured,
ret = nbd_co_receive_one_chunk(s, handle, iter->only_structured,
&request_ret, qiov, reply, payload,
&local_err);
if (ret < 0) {
@@ -1038,7 +1038,7 @@ static bool coroutine_fn nbd_reply_chunk_iter_receive(BDRVNBDState *s,
break_loop:
qemu_mutex_lock(&s->requests_lock);
s->requests[COOKIE_TO_INDEX(cookie)].coroutine = NULL;
s->requests[HANDLE_TO_INDEX(s, handle)].coroutine = NULL;
s->in_flight--;
qemu_co_queue_next(&s->free_sema);
qemu_mutex_unlock(&s->requests_lock);
@@ -1046,13 +1046,12 @@ break_loop:
return false;
}
static int coroutine_fn
nbd_co_receive_return_code(BDRVNBDState *s, uint64_t cookie,
int *request_ret, Error **errp)
static int coroutine_fn nbd_co_receive_return_code(BDRVNBDState *s, uint64_t handle,
int *request_ret, Error **errp)
{
NBDReplyChunkIter iter;
NBD_FOREACH_REPLY_CHUNK(s, iter, cookie, false, NULL, NULL, NULL) {
NBD_FOREACH_REPLY_CHUNK(s, iter, handle, false, NULL, NULL, NULL) {
/* nbd_reply_chunk_iter_receive does all the work */
}
@@ -1061,17 +1060,16 @@ nbd_co_receive_return_code(BDRVNBDState *s, uint64_t cookie,
return iter.ret;
}
static int coroutine_fn
nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t cookie,
uint64_t offset, QEMUIOVector *qiov,
int *request_ret, Error **errp)
static int coroutine_fn nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t handle,
uint64_t offset, QEMUIOVector *qiov,
int *request_ret, Error **errp)
{
NBDReplyChunkIter iter;
NBDReply reply;
void *payload = NULL;
Error *local_err = NULL;
NBD_FOREACH_REPLY_CHUNK(s, iter, cookie, s->info.structured_reply,
NBD_FOREACH_REPLY_CHUNK(s, iter, handle, s->info.structured_reply,
qiov, &reply, &payload)
{
int ret;
@@ -1114,10 +1112,10 @@ nbd_co_receive_cmdread_reply(BDRVNBDState *s, uint64_t cookie,
return iter.ret;
}
static int coroutine_fn
nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
uint64_t length, NBDExtent *extent,
int *request_ret, Error **errp)
static int coroutine_fn nbd_co_receive_blockstatus_reply(BDRVNBDState *s,
uint64_t handle, uint64_t length,
NBDExtent *extent,
int *request_ret, Error **errp)
{
NBDReplyChunkIter iter;
NBDReply reply;
@@ -1126,7 +1124,7 @@ nbd_co_receive_blockstatus_reply(BDRVNBDState *s, uint64_t cookie,
bool received = false;
assert(!extent->length);
NBD_FOREACH_REPLY_CHUNK(s, iter, cookie, false, NULL, &reply, &payload) {
NBD_FOREACH_REPLY_CHUNK(s, iter, handle, false, NULL, &reply, &payload) {
int ret;
NBDStructuredReplyChunk *chunk = &reply.structured;
@@ -1196,11 +1194,11 @@ nbd_co_request(BlockDriverState *bs, NBDRequest *request,
continue;
}
ret = nbd_co_receive_return_code(s, request->cookie,
ret = nbd_co_receive_return_code(s, request->handle,
&request_ret, &local_err);
if (local_err) {
trace_nbd_co_request_fail(request->from, request->len,
request->cookie, request->flags,
request->handle, request->flags,
request->type,
nbd_cmd_lookup(request->type),
ret, error_get_pretty(local_err));
@@ -1255,10 +1253,10 @@ nbd_client_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
continue;
}
ret = nbd_co_receive_cmdread_reply(s, request.cookie, offset, qiov,
ret = nbd_co_receive_cmdread_reply(s, request.handle, offset, qiov,
&request_ret, &local_err);
if (local_err) {
trace_nbd_co_request_fail(request.from, request.len, request.cookie,
trace_nbd_co_request_fail(request.from, request.len, request.handle,
request.flags, request.type,
nbd_cmd_lookup(request.type),
ret, error_get_pretty(local_err));
@@ -1413,11 +1411,11 @@ static int coroutine_fn GRAPH_RDLOCK nbd_client_co_block_status(
continue;
}
ret = nbd_co_receive_blockstatus_reply(s, request.cookie, bytes,
ret = nbd_co_receive_blockstatus_reply(s, request.handle, bytes,
&extent, &request_ret,
&local_err);
if (local_err) {
trace_nbd_co_request_fail(request.from, request.len, request.cookie,
trace_nbd_co_request_fail(request.from, request.len, request.handle,
request.flags, request.type,
nbd_cmd_lookup(request.type),
ret, error_get_pretty(local_err));
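
Both request-tagging schemes in this hunk are invertible: the cookie is simply index + 1 (keeping 0 free to mean "no reply pending"), while the handle XORs the index with the BDRVNBDState pointer, which the same XOR undoes. A stand-alone demonstration using the macros exactly as they appear above:

    /* Sketch: the two invertible tagging schemes from this diff. */
    #include <stdint.h>
    #include <stdio.h>

    #define COOKIE_TO_INDEX(c)      ((c) - 1)
    #define INDEX_TO_COOKIE(i)      ((i) + 1)
    #define HANDLE_TO_INDEX(p, h)   ((h) ^ (uint64_t)(intptr_t)(p))
    #define INDEX_TO_HANDLE(p, i)   ((i) ^ (uint64_t)(intptr_t)(p))

    int main(void)
    {
        int dummy;                  /* stands in for the BDRVNBDState */
        uint64_t i = 5;

        uint64_t c = INDEX_TO_COOKIE(i);
        uint64_t h = INDEX_TO_HANDLE(&dummy, i);
        printf("cookie %llu -> index %llu\n",
               (unsigned long long)c, (unsigned long long)COOKIE_TO_INDEX(c));
        printf("handle %llx -> index %llu\n",
               (unsigned long long)h,
               (unsigned long long)HANDLE_TO_INDEX(&dummy, h));
        return 0;
    }
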

View File

@@ -501,9 +501,8 @@ static void nvme_submit_command(NVMeQueuePair *q, NVMeRequest *req,
q->sq.tail * NVME_SQ_ENTRY_BYTES, cmd, sizeof(*cmd));
q->sq.tail = (q->sq.tail + 1) % NVME_QUEUE_SIZE;
q->need_kick++;
qemu_mutex_unlock(&q->lock);
blk_io_plug_call(nvme_unplug_fn, q);
qemu_mutex_unlock(&q->lock);
}
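
The one-line move above decides whether the deferred doorbell kick is scheduled while q->lock is still held. The plug/unplug idea itself: submissions only bump a shadow tail, and a single unplug callback rings the doorbell once per batch. A sketch with illustrative names, not QEMU's blk_io_plug_call() machinery:

    /* Sketch: coalescing doorbell writes ("plugging"). */
    #include <stdio.h>

    typedef struct {
        unsigned tail;      /* shadow copy of the SQ tail */
        unsigned doorbell;  /* what the device has seen */
    } Queue;

    static void submit(Queue *q) { q->tail++; }        /* cheap, many times */

    static void unplug(Queue *q)                       /* once per batch */
    {
        if (q->doorbell != q->tail) {
            q->doorbell = q->tail;                     /* MMIO write here */
            printf("doorbell rung at tail %u\n", q->tail);
        }
    }

    int main(void)
    {
        Queue q = { 0 };
        submit(&q); submit(&q); submit(&q);
        unplug(&q);             /* three submissions, one doorbell write */
        return 0;
    }
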
static void nvme_admin_cmd_sync_cb(void *opaque, int ret)

View File

@@ -200,7 +200,7 @@ allocate_clusters(BlockDriverState *bs, int64_t sector_num,
assert(idx < s->bat_size && idx + to_allocate <= s->bat_size);
space = to_allocate * s->tracks;
len = bdrv_co_getlength(bs->file->bs);
len = bdrv_getlength(bs->file->bs);
if (len < 0) {
return len;
}
@@ -448,7 +448,7 @@ parallels_check_outside_image(BlockDriverState *bs, BdrvCheckResult *res,
uint32_t i;
int64_t off, high_off, size;
size = bdrv_co_getlength(bs->file->bs);
size = bdrv_getlength(bs->file->bs);
if (size < 0) {
res->check_errors++;
return size;

View File

@@ -370,7 +370,7 @@ get_cluster_offset(BlockDriverState *bs, uint64_t offset, int allocate,
if (!allocate)
return 0;
/* allocate a new l2 entry */
l2_offset = bdrv_co_getlength(bs->file->bs);
l2_offset = bdrv_getlength(bs->file->bs);
if (l2_offset < 0) {
return l2_offset;
}
@@ -379,7 +379,7 @@ get_cluster_offset(BlockDriverState *bs, uint64_t offset, int allocate,
/* update the L1 entry */
s->l1_table[l1_index] = l2_offset;
tmp = cpu_to_be64(l2_offset);
BLKDBG_CO_EVENT(bs->file, BLKDBG_L1_UPDATE);
BLKDBG_EVENT(bs->file, BLKDBG_L1_UPDATE);
ret = bdrv_co_pwrite_sync(bs->file,
s->l1_table_offset + l1_index * sizeof(tmp),
sizeof(tmp), &tmp, 0);
@@ -410,7 +410,7 @@ get_cluster_offset(BlockDriverState *bs, uint64_t offset, int allocate,
}
}
l2_table = s->l2_cache + (min_index << s->l2_bits);
BLKDBG_CO_EVENT(bs->file, BLKDBG_L2_LOAD);
BLKDBG_EVENT(bs->file, BLKDBG_L2_LOAD);
if (new_l2_table) {
memset(l2_table, 0, s->l2_size * sizeof(uint64_t));
ret = bdrv_co_pwrite_sync(bs->file, l2_offset,
@@ -434,7 +434,7 @@ get_cluster_offset(BlockDriverState *bs, uint64_t offset, int allocate,
((cluster_offset & QCOW_OFLAG_COMPRESSED) && allocate == 1)) {
if (!allocate)
return 0;
BLKDBG_CO_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC);
BLKDBG_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC);
assert(QEMU_IS_ALIGNED(n_start | n_end, BDRV_SECTOR_SIZE));
/* allocate a new cluster */
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) &&
@@ -445,20 +445,20 @@ get_cluster_offset(BlockDriverState *bs, uint64_t offset, int allocate,
if (decompress_cluster(bs, cluster_offset) < 0) {
return -EIO;
}
cluster_offset = bdrv_co_getlength(bs->file->bs);
cluster_offset = bdrv_getlength(bs->file->bs);
if ((int64_t) cluster_offset < 0) {
return cluster_offset;
}
cluster_offset = QEMU_ALIGN_UP(cluster_offset, s->cluster_size);
/* write the cluster content */
BLKDBG_CO_EVENT(bs->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
ret = bdrv_co_pwrite(bs->file, cluster_offset, s->cluster_size,
s->cluster_cache, 0);
if (ret < 0) {
return ret;
}
} else {
cluster_offset = bdrv_co_getlength(bs->file->bs);
cluster_offset = bdrv_getlength(bs->file->bs);
if ((int64_t) cluster_offset < 0) {
return cluster_offset;
}
@@ -491,7 +491,7 @@ get_cluster_offset(BlockDriverState *bs, uint64_t offset, int allocate,
NULL) < 0) {
return -EIO;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
ret = bdrv_co_pwrite(bs->file, cluster_offset + i,
BDRV_SECTOR_SIZE,
s->cluster_data, 0);
@@ -510,9 +510,9 @@ get_cluster_offset(BlockDriverState *bs, uint64_t offset, int allocate,
tmp = cpu_to_be64(cluster_offset);
l2_table[l2_index] = tmp;
if (allocate == 2) {
BLKDBG_CO_EVENT(bs->file, BLKDBG_L2_UPDATE_COMPRESSED);
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE_COMPRESSED);
} else {
BLKDBG_CO_EVENT(bs->file, BLKDBG_L2_UPDATE);
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE);
}
ret = bdrv_co_pwrite_sync(bs->file, l2_offset + l2_index * sizeof(tmp),
sizeof(tmp), &tmp, 0);
@@ -595,7 +595,7 @@ decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset)
if (s->cluster_cache_offset != coffset) {
csize = cluster_offset >> (63 - s->cluster_bits);
csize &= (s->cluster_size - 1);
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_COMPRESSED);
BLKDBG_EVENT(bs->file, BLKDBG_READ_COMPRESSED);
ret = bdrv_co_pread(bs->file, coffset, csize, s->cluster_data, 0);
if (ret < 0)
return -1;
@@ -657,7 +657,7 @@ qcow_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
/* read from the base image */
qemu_co_mutex_unlock(&s->lock);
/* qcow2 emits this on bs->file instead of bs->backing */
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
ret = bdrv_co_pread(bs->backing, offset, n, buf, 0);
qemu_co_mutex_lock(&s->lock);
if (ret < 0) {
@@ -680,7 +680,7 @@ qcow_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
break;
}
qemu_co_mutex_unlock(&s->lock);
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
ret = bdrv_co_pread(bs->file, cluster_offset + offset_in_cluster,
n, buf, 0);
qemu_co_mutex_lock(&s->lock);
@@ -765,7 +765,7 @@ qcow_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
}
qemu_co_mutex_unlock(&s->lock);
BLKDBG_CO_EVENT(bs->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
ret = bdrv_co_pwrite(bs->file, cluster_offset + offset_in_cluster,
n, buf, 0);
qemu_co_mutex_lock(&s->lock);
@@ -1114,7 +1114,7 @@ qcow_co_pwritev_compressed(BlockDriverState *bs, int64_t offset, int64_t bytes,
}
cluster_offset &= s->cluster_offset_mask;
BLKDBG_CO_EVENT(bs->file, BLKDBG_WRITE_COMPRESSED);
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_COMPRESSED);
ret = bdrv_co_pwrite(bs->file, cluster_offset, out_len, out_buf, 0);
if (ret < 0) {
goto fail;
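
decompress_cluster() above unpacks the compressed-cluster descriptor with two operations: shift the size field down from the high bits, then mask it to the cluster size. The same arithmetic as a stand-alone example (cluster_bits and the field values are illustrative):

    /* Sketch: unpacking a qcow compressed-cluster descriptor, following
     * the shifts visible in decompress_cluster(). */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int cluster_bits = 9;
        uint64_t cluster_size = 1ull << cluster_bits;

        /* Build a descriptor: low bits hold the byte offset, high bits
         * hold the size field. */
        uint64_t coffset = 0x10000;
        uint64_t size_field = 200;
        uint64_t desc = (size_field << (63 - cluster_bits)) | coffset;

        uint64_t csize = (desc >> (63 - cluster_bits)) & (cluster_size - 1);
        printf("offset 0x%llx, compressed size %llu\n",
               (unsigned long long)(desc & ((1ull << (63 - cluster_bits)) - 1)),
               (unsigned long long)csize);
        return 0;
    }
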

View File

@@ -283,9 +283,10 @@ static int free_bitmap_clusters(BlockDriverState *bs, Qcow2BitmapTable *tb)
/* load_bitmap_data
* @bitmap_table entries must satisfy specification constraints.
* @bitmap must be cleared */
static int coroutine_fn GRAPH_RDLOCK
load_bitmap_data(BlockDriverState *bs, const uint64_t *bitmap_table,
uint32_t bitmap_table_size, BdrvDirtyBitmap *bitmap)
static int load_bitmap_data(BlockDriverState *bs,
const uint64_t *bitmap_table,
uint32_t bitmap_table_size,
BdrvDirtyBitmap *bitmap)
{
int ret = 0;
BDRVQcow2State *s = bs->opaque;
@@ -318,7 +319,7 @@ load_bitmap_data(BlockDriverState *bs, const uint64_t *bitmap_table,
* already cleared */
}
} else {
ret = bdrv_co_pread(bs->file, data_offset, s->cluster_size, buf, 0);
ret = bdrv_pread(bs->file, data_offset, s->cluster_size, buf, 0);
if (ret < 0) {
goto finish;
}
@@ -336,9 +337,8 @@ finish:
return ret;
}
static coroutine_fn GRAPH_RDLOCK
BdrvDirtyBitmap *load_bitmap(BlockDriverState *bs,
Qcow2Bitmap *bm, Error **errp)
static BdrvDirtyBitmap *load_bitmap(BlockDriverState *bs,
Qcow2Bitmap *bm, Error **errp)
{
int ret;
uint64_t *bitmap_table = NULL;
@@ -649,10 +649,9 @@ fail:
return NULL;
}
int coroutine_fn
qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size)
int qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size)
{
int ret;
BDRVQcow2State *s = bs->opaque;
@@ -958,9 +957,8 @@ static void set_readonly_helper(gpointer bitmap, gpointer value)
* If header_updated is not NULL then it is set appropriately regardless of
* the return value.
*/
bool coroutine_fn GRAPH_RDLOCK
qcow2_load_dirty_bitmaps(BlockDriverState *bs,
bool *header_updated, Error **errp)
bool coroutine_fn qcow2_load_dirty_bitmaps(BlockDriverState *bs,
bool *header_updated, Error **errp)
{
BDRVQcow2State *s = bs->opaque;
Qcow2BitmapList *bm_list;

View File

@@ -48,7 +48,7 @@ int coroutine_fn qcow2_shrink_l1_table(BlockDriverState *bs,
fprintf(stderr, "shrink l1_table from %d to %d\n", s->l1_size, new_l1_size);
#endif
BLKDBG_CO_EVENT(bs->file, BLKDBG_L1_SHRINK_WRITE_TABLE);
BLKDBG_EVENT(bs->file, BLKDBG_L1_SHRINK_WRITE_TABLE);
ret = bdrv_co_pwrite_zeroes(bs->file,
s->l1_table_offset + new_l1_size * L1E_SIZE,
(s->l1_size - new_l1_size) * L1E_SIZE, 0);
@@ -61,7 +61,7 @@ int coroutine_fn qcow2_shrink_l1_table(BlockDriverState *bs,
goto fail;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_L1_SHRINK_FREE_L2_CLUSTERS);
BLKDBG_EVENT(bs->file, BLKDBG_L1_SHRINK_FREE_L2_CLUSTERS);
for (i = s->l1_size - 1; i > new_l1_size - 1; i--) {
if ((s->l1_table[i] & L1E_OFFSET_MASK) == 0) {
continue;
@@ -501,7 +501,7 @@ do_perform_cow_read(BlockDriverState *bs, uint64_t src_cluster_offset,
return 0;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_COW_READ);
BLKDBG_EVENT(bs->file, BLKDBG_COW_READ);
if (!bs->drv) {
return -ENOMEDIUM;
@@ -551,7 +551,7 @@ do_perform_cow_write(BlockDriverState *bs, uint64_t cluster_offset,
return ret;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_COW_WRITE);
BLKDBG_EVENT(bs->file, BLKDBG_COW_WRITE);
ret = bdrv_co_pwritev(s->data_file, cluster_offset + offset_in_cluster,
qiov->size, qiov, 0);
if (ret < 0) {
@@ -823,9 +823,10 @@ static int get_cluster_table(BlockDriverState *bs, uint64_t offset,
*
* Return 0 on success and -errno in error cases
*/
int coroutine_fn GRAPH_RDLOCK
qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs, uint64_t offset,
int compressed_size, uint64_t *host_offset)
int coroutine_fn qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
uint64_t offset,
int compressed_size,
uint64_t *host_offset)
{
BDRVQcow2State *s = bs->opaque;
int l2_index, ret;
@@ -871,7 +872,7 @@ qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs, uint64_t offset,
/* compressed clusters never have the copied flag */
BLKDBG_CO_EVENT(bs->file, BLKDBG_L2_UPDATE_COMPRESSED);
BLKDBG_EVENT(bs->file, BLKDBG_L2_UPDATE_COMPRESSED);
qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_slice);
set_l2_entry(s, l2_slice, l2_index, cluster_offset);
if (has_subclusters(s)) {
@@ -991,7 +992,7 @@ perform_cow(BlockDriverState *bs, QCowL2Meta *m)
/* NOTE: we have a write_aio blkdebug event here followed by
* a cow_write one in do_perform_cow_write(), but there's only
* one single I/O operation */
BLKDBG_CO_EVENT(bs->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
ret = do_perform_cow_write(bs, m->alloc_offset, start->offset, &qiov);
} else {
/* If there's no guest data then write both COW regions separately */
@@ -2037,9 +2038,8 @@ fail:
* all clusters in the same L2 slice) and returns the number of zeroed
* clusters.
*/
static int coroutine_fn
zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
uint64_t nb_clusters, int flags)
static int zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
uint64_t nb_clusters, int flags)
{
BDRVQcow2State *s = bs->opaque;
uint64_t *l2_slice;

View File

@@ -118,7 +118,7 @@ int coroutine_fn qcow2_refcount_init(BlockDriverState *bs)
ret = -ENOMEM;
goto fail;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_REFTABLE_LOAD);
BLKDBG_EVENT(bs->file, BLKDBG_REFTABLE_LOAD);
ret = bdrv_co_pread(bs->file, s->refcount_table_offset,
refcount_table_size2, s->refcount_table, 0);
if (ret < 0) {
@@ -1069,14 +1069,14 @@ int64_t coroutine_fn qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offs
/* only used to allocate compressed sectors. We try to allocate
contiguous sectors. size must be <= cluster_size */
int64_t coroutine_fn GRAPH_RDLOCK qcow2_alloc_bytes(BlockDriverState *bs, int size)
int64_t coroutine_fn qcow2_alloc_bytes(BlockDriverState *bs, int size)
{
BDRVQcow2State *s = bs->opaque;
int64_t offset;
size_t free_in_cluster;
int ret;
BLKDBG_CO_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC_BYTES);
BLKDBG_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC_BYTES);
assert(size > 0 && size <= s->cluster_size);
assert(!s->free_byte_offset || offset_into_cluster(s, s->free_byte_offset));
@@ -1524,11 +1524,10 @@ static int realloc_refcount_array(BDRVQcow2State *s, void **array,
*
* Modifies the number of errors in res.
*/
int coroutine_fn GRAPH_RDLOCK
qcow2_inc_refcounts_imrt(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size,
int64_t offset, int64_t size)
int qcow2_inc_refcounts_imrt(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size,
int64_t offset, int64_t size)
{
BDRVQcow2State *s = bs->opaque;
uint64_t start, last, cluster_offset, k, refcount;
@@ -1539,7 +1538,7 @@ qcow2_inc_refcounts_imrt(BlockDriverState *bs, BdrvCheckResult *res,
return 0;
}
file_len = bdrv_co_getlength(bs->file->bs);
file_len = bdrv_getlength(bs->file->bs);
if (file_len < 0) {
return file_len;
}
@@ -1601,11 +1600,10 @@ enum {
*
* On failure in-memory @l2_table may be modified.
*/
static int coroutine_fn GRAPH_RDLOCK
fix_l2_entry_by_zero(BlockDriverState *bs, BdrvCheckResult *res,
uint64_t l2_offset, uint64_t *l2_table,
int l2_index, bool active,
bool *metadata_overlap)
static int fix_l2_entry_by_zero(BlockDriverState *bs, BdrvCheckResult *res,
uint64_t l2_offset,
uint64_t *l2_table, int l2_index, bool active,
bool *metadata_overlap)
{
BDRVQcow2State *s = bs->opaque;
int ret;
@@ -1636,8 +1634,8 @@ fix_l2_entry_by_zero(BlockDriverState *bs, BdrvCheckResult *res,
goto fail;
}
ret = bdrv_co_pwrite_sync(bs->file, l2e_offset, l2_entry_size(s),
&l2_table[idx], 0);
ret = bdrv_pwrite_sync(bs->file, l2e_offset, l2_entry_size(s),
&l2_table[idx], 0);
if (ret < 0) {
fprintf(stderr, "ERROR: Failed to overwrite L2 "
"table entry: %s\n", strerror(-ret));
@@ -1661,11 +1659,10 @@ fail:
* Returns the number of errors found by the checks or -errno if an internal
* error occurred.
*/
static int coroutine_fn GRAPH_RDLOCK
check_refcounts_l2(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size, int64_t l2_offset,
int flags, BdrvCheckMode fix, bool active)
static int check_refcounts_l2(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size, int64_t l2_offset,
int flags, BdrvCheckMode fix, bool active)
{
BDRVQcow2State *s = bs->opaque;
uint64_t l2_entry, l2_bitmap;
@@ -1676,7 +1673,7 @@ check_refcounts_l2(BlockDriverState *bs, BdrvCheckResult *res,
bool metadata_overlap;
/* Read L2 table from disk */
ret = bdrv_co_pread(bs->file, l2_offset, l2_size_bytes, l2_table, 0);
ret = bdrv_pread(bs->file, l2_offset, l2_size_bytes, l2_table, 0);
if (ret < 0) {
fprintf(stderr, "ERROR: I/O error in check_refcounts_l2\n");
res->check_errors++;
@@ -1861,11 +1858,12 @@ check_refcounts_l2(BlockDriverState *bs, BdrvCheckResult *res,
* Returns the number of errors found by the checks or -errno if an internal
* error occurred.
*/
static int coroutine_fn GRAPH_RDLOCK
check_refcounts_l1(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table, int64_t *refcount_table_size,
int64_t l1_table_offset, int l1_size,
int flags, BdrvCheckMode fix, bool active)
static int check_refcounts_l1(BlockDriverState *bs,
BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size,
int64_t l1_table_offset, int l1_size,
int flags, BdrvCheckMode fix, bool active)
{
BDRVQcow2State *s = bs->opaque;
size_t l1_size_bytes = l1_size * L1E_SIZE;
@@ -1891,7 +1889,7 @@ check_refcounts_l1(BlockDriverState *bs, BdrvCheckResult *res,
}
/* Read L1 table entries from disk */
ret = bdrv_co_pread(bs->file, l1_table_offset, l1_size_bytes, l1_table, 0);
ret = bdrv_pread(bs->file, l1_table_offset, l1_size_bytes, l1_table, 0);
if (ret < 0) {
fprintf(stderr, "ERROR: I/O error in check_refcounts_l1\n");
res->check_errors++;
@@ -1951,8 +1949,8 @@ check_refcounts_l1(BlockDriverState *bs, BdrvCheckResult *res,
* have been already detected and sufficiently signaled by the calling function
* (qcow2_check_refcounts) by the time this function is called).
*/
static int coroutine_fn GRAPH_RDLOCK
check_oflag_copied(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix)
static int check_oflag_copied(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
{
BDRVQcow2State *s = bs->opaque;
uint64_t *l2_table = qemu_blockalign(bs, s->cluster_size);
@@ -2007,8 +2005,8 @@ check_oflag_copied(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix
}
}
ret = bdrv_co_pread(bs->file, l2_offset, s->l2_size * l2_entry_size(s),
l2_table, 0);
ret = bdrv_pread(bs->file, l2_offset, s->l2_size * l2_entry_size(s),
l2_table, 0);
if (ret < 0) {
fprintf(stderr, "ERROR: Could not read L2 table: %s\n",
strerror(-ret));
@@ -2061,7 +2059,8 @@ check_oflag_copied(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix
goto fail;
}
ret = bdrv_co_pwrite(bs->file, l2_offset, s->cluster_size, l2_table, 0);
ret = bdrv_pwrite(bs->file, l2_offset, s->cluster_size, l2_table,
0);
if (ret < 0) {
fprintf(stderr, "ERROR: Could not write L2 table: %s\n",
strerror(-ret));
@@ -2084,10 +2083,9 @@ fail:
* Checks consistency of refblocks and accounts for each refblock in
* *refcount_table.
*/
static int coroutine_fn GRAPH_RDLOCK
check_refblocks(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix, bool *rebuild,
void **refcount_table, int64_t *nb_clusters)
static int check_refblocks(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix, bool *rebuild,
void **refcount_table, int64_t *nb_clusters)
{
BDRVQcow2State *s = bs->opaque;
int64_t i, size;
@@ -2129,13 +2127,13 @@ check_refblocks(BlockDriverState *bs, BdrvCheckResult *res,
goto resize_fail;
}
ret = bdrv_co_truncate(bs->file, offset + s->cluster_size, false,
PREALLOC_MODE_OFF, 0, &local_err);
ret = bdrv_truncate(bs->file, offset + s->cluster_size, false,
PREALLOC_MODE_OFF, 0, &local_err);
if (ret < 0) {
error_report_err(local_err);
goto resize_fail;
}
size = bdrv_co_getlength(bs->file->bs);
size = bdrv_getlength(bs->file->bs);
if (size < 0) {
ret = size;
goto resize_fail;
@@ -2199,10 +2197,9 @@ resize_fail:
/*
* Calculates an in-memory refcount table.
*/
static int coroutine_fn GRAPH_RDLOCK
calculate_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix, bool *rebuild,
void **refcount_table, int64_t *nb_clusters)
static int calculate_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix, bool *rebuild,
void **refcount_table, int64_t *nb_clusters)
{
BDRVQcow2State *s = bs->opaque;
int64_t i;
@@ -2302,11 +2299,10 @@ calculate_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
* Compares the actual reference count for each cluster in the image against the
* refcount as reported by the refcount structures on-disk.
*/
static void coroutine_fn
compare_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix, bool *rebuild,
int64_t *highest_cluster,
void *refcount_table, int64_t nb_clusters)
static void compare_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix, bool *rebuild,
int64_t *highest_cluster,
void *refcount_table, int64_t nb_clusters)
{
BDRVQcow2State *s = bs->opaque;
int64_t i;
@@ -2467,8 +2463,7 @@ static int64_t alloc_clusters_imrt(BlockDriverState *bs,
* Return whether the on-disk reftable array was resized (true/false),
* or -errno on error.
*/
static int coroutine_fn GRAPH_RDLOCK
rebuild_refcounts_write_refblocks(
static int rebuild_refcounts_write_refblocks(
BlockDriverState *bs, void **refcount_table, int64_t *nb_clusters,
int64_t first_cluster, int64_t end_cluster,
uint64_t **on_disk_reftable_ptr, uint32_t *on_disk_reftable_entries_ptr,
@@ -2583,8 +2578,8 @@ rebuild_refcounts_write_refblocks(
on_disk_refblock = (void *)((char *) *refcount_table +
refblock_index * s->cluster_size);
ret = bdrv_co_pwrite(bs->file, refblock_offset, s->cluster_size,
on_disk_refblock, 0);
ret = bdrv_pwrite(bs->file, refblock_offset, s->cluster_size,
on_disk_refblock, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "ERROR writing refblock");
return ret;
@@ -2606,10 +2601,11 @@ rebuild_refcounts_write_refblocks(
* On success, the old refcount structure is leaked (it will be covered by the
* new refcount structure).
*/
static int coroutine_fn GRAPH_RDLOCK
rebuild_refcount_structure(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table, int64_t *nb_clusters,
Error **errp)
static int rebuild_refcount_structure(BlockDriverState *bs,
BdrvCheckResult *res,
void **refcount_table,
int64_t *nb_clusters,
Error **errp)
{
BDRVQcow2State *s = bs->opaque;
int64_t reftable_offset = -1;
@@ -2738,8 +2734,8 @@ rebuild_refcount_structure(BlockDriverState *bs, BdrvCheckResult *res,
}
assert(reftable_length < INT_MAX);
ret = bdrv_co_pwrite(bs->file, reftable_offset, reftable_length,
on_disk_reftable, 0);
ret = bdrv_pwrite(bs->file, reftable_offset, reftable_length,
on_disk_reftable, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "ERROR writing reftable");
goto fail;
@@ -2749,10 +2745,10 @@ rebuild_refcount_structure(BlockDriverState *bs, BdrvCheckResult *res,
reftable_offset_and_clusters.reftable_offset = cpu_to_be64(reftable_offset);
reftable_offset_and_clusters.reftable_clusters =
cpu_to_be32(reftable_clusters);
ret = bdrv_co_pwrite_sync(bs->file,
offsetof(QCowHeader, refcount_table_offset),
sizeof(reftable_offset_and_clusters),
&reftable_offset_and_clusters, 0);
ret = bdrv_pwrite_sync(bs->file,
offsetof(QCowHeader, refcount_table_offset),
sizeof(reftable_offset_and_clusters),
&reftable_offset_and_clusters, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "ERROR setting reftable");
goto fail;
@@ -2781,8 +2777,8 @@ fail:
* Returns 0 if no errors are found, the number of errors in case the image is
* detected as corrupted, and -errno when an internal error occurred.
*/
int coroutine_fn GRAPH_RDLOCK
qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix)
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix)
{
BDRVQcow2State *s = bs->opaque;
BdrvCheckResult pre_compare_res;
@@ -2791,7 +2787,7 @@ qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode
bool rebuild = false;
int ret;
size = bdrv_co_getlength(bs->file->bs);
size = bdrv_getlength(bs->file->bs);
if (size < 0) {
res->check_errors++;
return size;
@@ -3545,8 +3541,7 @@ done:
return ret;
}
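
qcow2_inc_refcounts_imrt() above bumps one in-memory counter per cluster touched by a byte range, clamping the range to cluster boundaries first. The range arithmetic in isolation (cluster size and table width are illustrative):

    /* Sketch: per-cluster accounting over a byte range, the shape of the
     * in-memory refcount table (IMRT) update. */
    #include <stdint.h>
    #include <stdio.h>

    #define CLUSTER_BITS 16
    #define CLUSTER_SIZE (1u << CLUSTER_BITS)

    static uint16_t refcounts[1024];

    static void inc_refcounts(uint64_t offset, uint64_t size)
    {
        uint64_t start = offset & ~(uint64_t)(CLUSTER_SIZE - 1);
        uint64_t last = (offset + size - 1) & ~(uint64_t)(CLUSTER_SIZE - 1);

        for (uint64_t c = start; c <= last; c += CLUSTER_SIZE) {
            refcounts[c >> CLUSTER_BITS]++;
        }
    }

    int main(void)
    {
        inc_refcounts(0x10000, 0x20000);    /* exactly two clusters */
        inc_refcounts(0x1ffff, 2);          /* straddles a boundary */
        printf("refcount[1]=%u refcount[2]=%u refcount[3]=%u\n",
               refcounts[1], refcounts[2], refcounts[3]);
        return 0;
    }
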
static int64_t coroutine_fn get_refblock_offset(BlockDriverState *bs,
uint64_t offset)
static int64_t get_refblock_offset(BlockDriverState *bs, uint64_t offset)
{
BDRVQcow2State *s = bs->opaque;
uint32_t index = offset_to_reftable_index(s, offset);
@@ -3712,8 +3707,7 @@ int64_t coroutine_fn qcow2_get_last_cluster(BlockDriverState *bs, int64_t size)
return -EIO;
}
int coroutine_fn GRAPH_RDLOCK
qcow2_detect_metadata_preallocation(BlockDriverState *bs)
int coroutine_fn qcow2_detect_metadata_preallocation(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
int64_t i, end_cluster, cluster_count = 0, threshold;

View File

@@ -570,7 +570,7 @@ int qcow2_mark_corrupt(BlockDriverState *bs)
* Marks the image as consistent, i.e., unsets the corrupt bit, and flushes
* before if necessary.
*/
static int coroutine_fn qcow2_mark_consistent(BlockDriverState *bs)
int qcow2_mark_consistent(BlockDriverState *bs)
{
BDRVQcow2State *s = bs->opaque;
@@ -2225,7 +2225,7 @@ qcow2_co_preadv_encrypted(BlockDriverState *bs,
return -ENOMEM;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
ret = bdrv_co_pread(s->data_file, host_offset, bytes, buf, 0);
if (ret < 0) {
goto fail;
@@ -2315,7 +2315,7 @@ qcow2_co_preadv_task(BlockDriverState *bs, QCow2SubclusterType subc_type,
case QCOW2_SUBCLUSTER_UNALLOCATED_ALLOC:
assert(bs->backing); /* otherwise handled in qcow2_co_preadv_part */
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
return bdrv_co_preadv_part(bs->backing, offset, bytes,
qiov, qiov_offset, 0);
@@ -2329,7 +2329,7 @@ qcow2_co_preadv_task(BlockDriverState *bs, QCow2SubclusterType subc_type,
offset, bytes, qiov, qiov_offset);
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_preadv_part(s->data_file, host_offset,
bytes, qiov, qiov_offset, 0);
@@ -2539,7 +2539,7 @@ handle_alloc_space(BlockDriverState *bs, QCowL2Meta *l2meta)
return ret;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC_SPACE);
BLKDBG_EVENT(bs->file, BLKDBG_CLUSTER_ALLOC_SPACE);
ret = bdrv_co_pwrite_zeroes(s->data_file, start_offset, nb_bytes,
BDRV_REQ_NO_FALLBACK);
if (ret < 0) {
@@ -2604,7 +2604,7 @@ int qcow2_co_pwritev_task(BlockDriverState *bs, uint64_t host_offset,
* guest data now.
*/
if (!merge_cow(offset, bytes, qiov, qiov_offset, l2meta)) {
BLKDBG_CO_EVENT(bs->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
trace_qcow2_writev_data(qemu_coroutine_self(), host_offset);
ret = bdrv_co_pwritev_part(s->data_file, host_offset,
bytes, qiov, qiov_offset, 0);
@@ -4678,7 +4678,7 @@ qcow2_co_pwritev_compressed_task(BlockDriverState *bs,
goto fail;
}
BLKDBG_CO_EVENT(s->data_file, BLKDBG_WRITE_COMPRESSED);
BLKDBG_EVENT(s->data_file, BLKDBG_WRITE_COMPRESSED);
ret = bdrv_co_pwrite(s->data_file, cluster_offset, out_len, out_buf, 0);
if (ret < 0) {
goto fail;
@@ -4797,7 +4797,7 @@ qcow2_co_preadv_compressed(BlockDriverState *bs,
out_buf = qemu_blockalign(bs, s->cluster_size);
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_COMPRESSED);
BLKDBG_EVENT(bs->file, BLKDBG_READ_COMPRESSED);
ret = bdrv_co_pread(bs->file, coffset, csize, buf, 0);
if (ret < 0) {
goto fail;
@@ -5344,7 +5344,7 @@ qcow2_co_save_vmstate(BlockDriverState *bs, QEMUIOVector *qiov, int64_t pos)
return offset;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_VMSTATE_SAVE);
BLKDBG_EVENT(bs->file, BLKDBG_VMSTATE_SAVE);
return bs->drv->bdrv_co_pwritev_part(bs, offset, qiov->size, qiov, 0, 0);
}
@@ -5356,7 +5356,7 @@ qcow2_co_load_vmstate(BlockDriverState *bs, QEMUIOVector *qiov, int64_t pos)
return offset;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_VMSTATE_LOAD);
BLKDBG_EVENT(bs->file, BLKDBG_VMSTATE_LOAD);
return bs->drv->bdrv_co_preadv_part(bs, offset, qiov->size, qiov, 0, 0);
}

View File

@@ -836,6 +836,7 @@ int64_t qcow2_refcount_metadata_size(int64_t clusters, size_t cluster_size,
int qcow2_mark_dirty(BlockDriverState *bs);
int qcow2_mark_corrupt(BlockDriverState *bs);
int qcow2_mark_consistent(BlockDriverState *bs);
int qcow2_update_header(BlockDriverState *bs);
void qcow2_signal_corruption(BlockDriverState *bs, bool fatal, int64_t offset,
@@ -866,7 +867,7 @@ int64_t qcow2_refcount_area(BlockDriverState *bs, uint64_t offset,
int64_t qcow2_alloc_clusters(BlockDriverState *bs, uint64_t size);
int64_t coroutine_fn qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int64_t nb_clusters);
int64_t coroutine_fn GRAPH_RDLOCK qcow2_alloc_bytes(BlockDriverState *bs, int size);
int64_t coroutine_fn qcow2_alloc_bytes(BlockDriverState *bs, int size);
void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size,
enum qcow2_discard_type type);
@@ -878,8 +879,8 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int qcow2_flush_caches(BlockDriverState *bs);
int qcow2_write_caches(BlockDriverState *bs);
int coroutine_fn qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix);
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix);
void qcow2_process_discards(BlockDriverState *bs, int ret);
@@ -887,10 +888,10 @@ int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
int64_t size);
int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
int64_t size, bool data_file);
int coroutine_fn qcow2_inc_refcounts_imrt(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size,
int64_t offset, int64_t size);
int qcow2_inc_refcounts_imrt(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size,
int64_t offset, int64_t size);
int qcow2_change_refcount_order(BlockDriverState *bs, int refcount_order,
BlockDriverAmendStatusCB *status_cb,
@@ -918,9 +919,10 @@ int qcow2_get_host_offset(BlockDriverState *bs, uint64_t offset,
int coroutine_fn qcow2_alloc_host_offset(BlockDriverState *bs, uint64_t offset,
unsigned int *bytes,
uint64_t *host_offset, QCowL2Meta **m);
int coroutine_fn GRAPH_RDLOCK
qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs, uint64_t offset,
int compressed_size, uint64_t *host_offset);
int coroutine_fn qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
uint64_t offset,
int compressed_size,
uint64_t *host_offset);
void qcow2_parse_compressed_l2_entry(BlockDriverState *bs, uint64_t l2_entry,
uint64_t *coffset, int *csize);
@@ -990,12 +992,11 @@ void *qcow2_cache_is_table_offset(Qcow2Cache *c, uint64_t offset);
void qcow2_cache_discard(Qcow2Cache *c, void *table);
/* qcow2-bitmap.c functions */
int coroutine_fn
qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size);
bool coroutine_fn GRAPH_RDLOCK
qcow2_load_dirty_bitmaps(BlockDriverState *bs, bool *header_updated, Error **errp);
int qcow2_check_bitmaps_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
void **refcount_table,
int64_t *refcount_table_size);
bool coroutine_fn qcow2_load_dirty_bitmaps(BlockDriverState *bs,
bool *header_updated, Error **errp);
bool qcow2_get_bitmap_info_list(BlockDriverState *bs,
Qcow2BitmapInfoList **info_list, Error **errp);
int qcow2_reopen_bitmaps_rw(BlockDriverState *bs, Error **errp);

View File

@@ -200,8 +200,7 @@ static void qed_check_for_leaks(QEDCheck *check)
/**
* Mark an image clean once it passes check or has been repaired
*/
static void coroutine_fn GRAPH_RDLOCK
qed_check_mark_clean(BDRVQEDState *s, BdrvCheckResult *result)
static void qed_check_mark_clean(BDRVQEDState *s, BdrvCheckResult *result)
{
/* Skip if there were unfixable corruptions or I/O errors */
if (result->corruptions > 0 || result->check_errors > 0) {
@@ -214,7 +213,7 @@ qed_check_mark_clean(BDRVQEDState *s, BdrvCheckResult *result)
}
/* Ensure fixes reach storage before clearing check bit */
bdrv_co_flush(s->bs);
bdrv_flush(s->bs);
s->header.features &= ~QED_F_NEED_CHECK;
qed_write_header_sync(s);
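
The ordering in qed_check_mark_clean() is deliberate: repairs are flushed to storage before QED_F_NEED_CHECK is cleared, so a crash can never leave a clean-looking header over unrepaired tables. The same write-then-flush-then-clear shape with plain POSIX I/O (offsets, the flag value, and the file name are illustrative, not a faithful QED header layout):

    /* Sketch: flush fixes to disk before dropping the need-check bit. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <unistd.h>

    #define F_NEED_CHECK 0x1

    int main(void)
    {
        int fd = open("image.bin", O_RDWR | O_CREAT, 0644);
        if (fd < 0) {
            return 1;
        }

        uint64_t features = F_NEED_CHECK;
        char fixed_tables[512] = "repaired tables";

        pwrite(fd, fixed_tables, sizeof(fixed_tables), 4096);
        fsync(fd);                          /* repairs are durable... */

        features &= ~F_NEED_CHECK;          /* ...only now drop the flag */
        pwrite(fd, &features, sizeof(features), 16);
        fsync(fd);

        close(fd);
        return 0;
    }
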

View File

@@ -122,7 +122,7 @@ int coroutine_fn qed_read_l1_table_sync(BDRVQEDState *s)
int coroutine_fn qed_write_l1_table(BDRVQEDState *s, unsigned int index,
unsigned int n)
{
BLKDBG_CO_EVENT(s->bs->file, BLKDBG_L1_UPDATE);
BLKDBG_EVENT(s->bs->file, BLKDBG_L1_UPDATE);
return qed_write_table(s, s->header.l1_table_offset,
s->l1_table, index, n, false);
}
@@ -150,7 +150,7 @@ int coroutine_fn qed_read_l2_table(BDRVQEDState *s, QEDRequest *request,
request->l2_table = qed_alloc_l2_cache_entry(&s->l2_cache);
request->l2_table->table = qed_alloc_table(s);
BLKDBG_CO_EVENT(s->bs->file, BLKDBG_L2_LOAD);
BLKDBG_EVENT(s->bs->file, BLKDBG_L2_LOAD);
ret = qed_read_table(s, offset, request->l2_table->table);
if (ret) {
@@ -183,7 +183,7 @@ int coroutine_fn qed_write_l2_table(BDRVQEDState *s, QEDRequest *request,
unsigned int index, unsigned int n,
bool flush)
{
BLKDBG_CO_EVENT(s->bs->file, BLKDBG_L2_UPDATE);
BLKDBG_EVENT(s->bs->file, BLKDBG_L2_UPDATE);
return qed_write_table(s, request->l2_table->offset,
request->l2_table->table, index, n, flush);
}

View File

@@ -195,15 +195,14 @@ static bool qed_is_image_size_valid(uint64_t image_size, uint32_t cluster_size,
*
* The string is NUL-terminated.
*/
static int coroutine_fn GRAPH_RDLOCK
qed_read_string(BdrvChild *file, uint64_t offset,
size_t n, char *buf, size_t buflen)
static int qed_read_string(BdrvChild *file, uint64_t offset, size_t n,
char *buf, size_t buflen)
{
int ret;
if (n >= buflen) {
return -EINVAL;
}
ret = bdrv_co_pread(file, offset, n, buf, 0);
ret = bdrv_pread(file, offset, n, buf, 0);
if (ret < 0) {
return ret;
}
@@ -883,7 +882,7 @@ static int coroutine_fn GRAPH_RDLOCK
qed_read_backing_file(BDRVQEDState *s, uint64_t pos, QEMUIOVector *qiov)
{
if (s->bs->backing) {
BLKDBG_CO_EVENT(s->bs->file, BLKDBG_READ_BACKING_AIO);
BLKDBG_EVENT(s->bs->file, BLKDBG_READ_BACKING_AIO);
return bdrv_co_preadv(s->bs->backing, pos, qiov->size, qiov, 0);
}
qemu_iovec_memset(qiov, 0, 0, qiov->size);
@@ -918,7 +917,7 @@ qed_copy_from_backing_file(BDRVQEDState *s, uint64_t pos, uint64_t len,
goto out;
}
BLKDBG_CO_EVENT(s->bs->file, BLKDBG_COW_WRITE);
BLKDBG_EVENT(s->bs->file, BLKDBG_COW_WRITE);
ret = bdrv_co_pwritev(s->bs->file, offset, qiov.size, &qiov, 0);
if (ret < 0) {
goto out;
@@ -1070,7 +1069,7 @@ static int coroutine_fn GRAPH_RDLOCK qed_aio_write_main(QEDAIOCB *acb)
trace_qed_aio_write_main(s, acb, 0, offset, acb->cur_qiov.size);
BLKDBG_CO_EVENT(s->bs->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(s->bs->file, BLKDBG_WRITE_AIO);
return bdrv_co_pwritev(s->bs->file, offset, acb->cur_qiov.size,
&acb->cur_qiov, 0);
}
@@ -1324,7 +1323,7 @@ qed_aio_read_data(void *opaque, int ret, uint64_t offset, size_t len)
} else if (ret != QED_CLUSTER_FOUND) {
r = qed_read_backing_file(s, acb->cur_pos, &acb->cur_qiov);
} else {
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
r = bdrv_co_preadv(bs->file, offset, acb->cur_qiov.size,
&acb->cur_qiov, 0);
}

View File

@@ -214,7 +214,7 @@ raw_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
return ret;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_AIO);
return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
}
@@ -268,7 +268,7 @@ raw_co_pwritev(BlockDriverState *bs, int64_t offset, int64_t bytes,
goto fail;
}
BLKDBG_CO_EVENT(bs->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_WRITE_AIO);
ret = bdrv_co_pwritev(bs->file, offset, bytes, qiov, flags);
fail:


@@ -169,10 +169,9 @@ exit:
* It is assumed that 'buffer' is at least 4096*num_sectors large.
*
* 0 is returned on success, -errno otherwise */
static int coroutine_fn GRAPH_RDLOCK
vhdx_log_write_sectors(BlockDriverState *bs, VHDXLogEntries *log,
uint32_t *sectors_written, void *buffer,
uint32_t num_sectors)
static int vhdx_log_write_sectors(BlockDriverState *bs, VHDXLogEntries *log,
uint32_t *sectors_written, void *buffer,
uint32_t num_sectors)
{
int ret = 0;
uint64_t offset;
@@ -196,7 +195,8 @@ vhdx_log_write_sectors(BlockDriverState *bs, VHDXLogEntries *log,
/* full */
break;
}
ret = bdrv_co_pwrite(bs->file, offset, VHDX_LOG_SECTOR_SIZE, buffer_tmp, 0);
ret = bdrv_pwrite(bs->file, offset, VHDX_LOG_SECTOR_SIZE, buffer_tmp,
0);
if (ret < 0) {
goto exit;
}
@@ -853,9 +853,8 @@ static void vhdx_log_raw_to_le_sector(VHDXLogDescriptor *desc,
}
static int coroutine_fn GRAPH_RDLOCK
vhdx_log_write(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset)
static int vhdx_log_write(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset)
{
int ret = 0;
void *buffer = NULL;
@@ -925,7 +924,7 @@ vhdx_log_write(BlockDriverState *bs, BDRVVHDXState *s,
sectors += partial_sectors;
file_length = bdrv_co_getlength(bs->file->bs);
file_length = bdrv_getlength(bs->file->bs);
if (file_length < 0) {
ret = file_length;
goto exit;
@@ -972,8 +971,8 @@ vhdx_log_write(BlockDriverState *bs, BDRVVHDXState *s,
if (i == 0 && leading_length) {
/* partial sector at the front of the buffer */
ret = bdrv_co_pread(bs->file, file_offset, VHDX_LOG_SECTOR_SIZE,
merged_sector, 0);
ret = bdrv_pread(bs->file, file_offset, VHDX_LOG_SECTOR_SIZE,
merged_sector, 0);
if (ret < 0) {
goto exit;
}
@@ -982,9 +981,9 @@ vhdx_log_write(BlockDriverState *bs, BDRVVHDXState *s,
sector_write = merged_sector;
} else if (i == sectors - 1 && trailing_length) {
/* partial sector at the end of the buffer */
ret = bdrv_co_pread(bs->file, file_offset + trailing_length,
VHDX_LOG_SECTOR_SIZE - trailing_length,
merged_sector + trailing_length, 0);
ret = bdrv_pread(bs->file, file_offset + trailing_length,
VHDX_LOG_SECTOR_SIZE - trailing_length,
merged_sector + trailing_length, 0);
if (ret < 0) {
goto exit;
}
@@ -1037,9 +1036,8 @@ exit:
}
/* Perform a log write, and then immediately flush the entire log */
int coroutine_fn
vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset)
int vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset)
{
int ret = 0;
VHDXLogSequence logs = { .valid = true,
@@ -1049,7 +1047,7 @@ vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
/* Make sure data written (new and/or changed blocks) is stable
* on disk, before creating log entry */
ret = bdrv_co_flush(bs);
ret = bdrv_flush(bs);
if (ret < 0) {
goto exit;
}
@@ -1061,7 +1059,7 @@ vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
logs.log = s->log;
/* Make sure log is stable on disk */
ret = bdrv_co_flush(bs);
ret = bdrv_flush(bs);
if (ret < 0) {
goto exit;
}
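vhdx_log_write_and_flush() above is classic write-ahead ordering: make the data stable, append the log entry describing it, then make the log stable before anything depends on it. Just that ordering over a raw fd (helper name and signature are assumptions):

    #include <errno.h>
    #include <unistd.h>

    static int log_write_and_flush(int fd, off_t log_off, const void *entry, size_t len)
    {
        if (fsync(fd) < 0) {                              /* 1. data stable */
            return -errno;
        }
        if (pwrite(fd, entry, len, log_off) != (ssize_t)len) {
            return -errno;                                /* 2. append log entry */
        }
        return fsync(fd) < 0 ? -errno : 0;                /* 3. log stable */
    }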


@@ -1250,13 +1250,12 @@ exit:
*
* Returns the file offset start of the new payload block
*/
static int coroutine_fn GRAPH_RDLOCK
vhdx_allocate_block(BlockDriverState *bs, BDRVVHDXState *s,
uint64_t *new_offset, bool *need_zero)
static int vhdx_allocate_block(BlockDriverState *bs, BDRVVHDXState *s,
uint64_t *new_offset, bool *need_zero)
{
int64_t current_len;
current_len = bdrv_co_getlength(bs->file->bs);
current_len = bdrv_getlength(bs->file->bs);
if (current_len < 0) {
return current_len;
}
@@ -1272,16 +1271,16 @@ vhdx_allocate_block(BlockDriverState *bs, BDRVVHDXState *s,
if (*need_zero) {
int ret;
ret = bdrv_co_truncate(bs->file, *new_offset + s->block_size, false,
PREALLOC_MODE_OFF, BDRV_REQ_ZERO_WRITE, NULL);
ret = bdrv_truncate(bs->file, *new_offset + s->block_size, false,
PREALLOC_MODE_OFF, BDRV_REQ_ZERO_WRITE, NULL);
if (ret != -ENOTSUP) {
*need_zero = false;
return ret;
}
}
return bdrv_co_truncate(bs->file, *new_offset + s->block_size, false,
PREALLOC_MODE_OFF, 0, NULL);
return bdrv_truncate(bs->file, *new_offset + s->block_size, false,
PREALLOC_MODE_OFF, 0, NULL);
}
/*
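vhdx_allocate_block() is a feature-probe fallback: first ask the layer below to grow the file with guaranteed-zero contents, and only on -ENOTSUP fall back to a plain resize, leaving *need_zero set so the caller zeroes the new block itself. A sketch under those assumed semantics, with a stub backend that always declines:

    #include <errno.h>
    #include <unistd.h>

    /* Stand-in for a backend that may not support zeroing on resize. */
    static int truncate_zeroed(int fd, off_t new_end)
    {
        (void)fd; (void)new_end;
        return -ENOTSUP;
    }

    static int grow_block(int fd, off_t new_end, int *need_zero)
    {
        if (*need_zero) {
            int ret = truncate_zeroed(fd, new_end);
            if (ret != -ENOTSUP) {
                *need_zero = 0;   /* backend zeroed it (or failed hard) */
                return ret;
            }
        }
        /* Fall back: plain resize; the caller must zero the block. */
        return ftruncate(fd, new_end) < 0 ? -errno : 0;
    }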
@@ -1573,10 +1572,12 @@ exit:
* The first 64KB of the Metadata section is reserved for the metadata
* header and entries; beyond that, the metadata items themselves reside.
*/
static int coroutine_fn
vhdx_create_new_metadata(BlockBackend *blk, uint64_t image_size,
uint32_t block_size, uint32_t sector_size,
uint64_t metadata_offset, VHDXImageType type)
static int vhdx_create_new_metadata(BlockBackend *blk,
uint64_t image_size,
uint32_t block_size,
uint32_t sector_size,
uint64_t metadata_offset,
VHDXImageType type)
{
int ret = 0;
uint32_t offset = 0;
@@ -1667,13 +1668,13 @@ vhdx_create_new_metadata(BlockBackend *blk, uint64_t image_size,
VHDX_META_FLAGS_IS_VIRTUAL_DISK;
vhdx_metadata_entry_le_export(&md_table_entry[4]);
ret = blk_co_pwrite(blk, metadata_offset, VHDX_HEADER_BLOCK_SIZE, buffer, 0);
ret = blk_pwrite(blk, metadata_offset, VHDX_HEADER_BLOCK_SIZE, buffer, 0);
if (ret < 0) {
goto exit;
}
ret = blk_co_pwrite(blk, metadata_offset + (64 * KiB),
VHDX_METADATA_ENTRY_BUFFER_SIZE, entry_buffer, 0);
ret = blk_pwrite(blk, metadata_offset + (64 * KiB),
VHDX_METADATA_ENTRY_BUFFER_SIZE, entry_buffer, 0);
if (ret < 0) {
goto exit;
}
@@ -1693,11 +1694,10 @@ exit:
* Fixed images: default state of the BAT is fully populated, with
* file offsets and state PAYLOAD_BLOCK_FULLY_PRESENT.
*/
static int coroutine_fn
vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
uint64_t image_size, VHDXImageType type,
bool use_zero_blocks, uint64_t file_offset,
uint32_t length, Error **errp)
static int vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
uint64_t image_size, VHDXImageType type,
bool use_zero_blocks, uint64_t file_offset,
uint32_t length, Error **errp)
{
int ret = 0;
uint64_t data_file_offset;
@@ -1718,14 +1718,14 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
if (type == VHDX_TYPE_DYNAMIC) {
/* All zeroes, so we can just extend the file - the end of the BAT
* is the furthest thing we have written yet */
ret = blk_co_truncate(blk, data_file_offset, false, PREALLOC_MODE_OFF,
0, errp);
ret = blk_truncate(blk, data_file_offset, false, PREALLOC_MODE_OFF,
0, errp);
if (ret < 0) {
goto exit;
}
} else if (type == VHDX_TYPE_FIXED) {
ret = blk_co_truncate(blk, data_file_offset + image_size, false,
PREALLOC_MODE_OFF, 0, errp);
ret = blk_truncate(blk, data_file_offset + image_size, false,
PREALLOC_MODE_OFF, 0, errp);
if (ret < 0) {
goto exit;
}
@@ -1759,7 +1759,7 @@ vhdx_create_bat(BlockBackend *blk, BDRVVHDXState *s,
s->bat[sinfo.bat_idx] = cpu_to_le64(s->bat[sinfo.bat_idx]);
sector_num += s->sectors_per_block;
}
ret = blk_co_pwrite(blk, file_offset, length, s->bat, 0);
ret = blk_pwrite(blk, file_offset, length, s->bat, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to write the BAT");
goto exit;
@@ -1780,12 +1780,15 @@ exit:
* to create the BAT itself, we will also cause the BAT to be
* created.
*/
static int coroutine_fn
vhdx_create_new_region_table(BlockBackend *blk, uint64_t image_size,
uint32_t block_size, uint32_t sector_size,
uint32_t log_size, bool use_zero_blocks,
VHDXImageType type, uint64_t *metadata_offset,
Error **errp)
static int vhdx_create_new_region_table(BlockBackend *blk,
uint64_t image_size,
uint32_t block_size,
uint32_t sector_size,
uint32_t log_size,
bool use_zero_blocks,
VHDXImageType type,
uint64_t *metadata_offset,
Error **errp)
{
int ret = 0;
uint32_t offset = 0;
@@ -1860,15 +1863,15 @@ vhdx_create_new_region_table(BlockBackend *blk, uint64_t image_size,
}
/* Now write out the region headers to disk */
ret = blk_co_pwrite(blk, VHDX_REGION_TABLE_OFFSET, VHDX_HEADER_BLOCK_SIZE,
buffer, 0);
ret = blk_pwrite(blk, VHDX_REGION_TABLE_OFFSET, VHDX_HEADER_BLOCK_SIZE,
buffer, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to write first region table");
goto exit;
}
ret = blk_co_pwrite(blk, VHDX_REGION_TABLE2_OFFSET, VHDX_HEADER_BLOCK_SIZE,
buffer, 0);
ret = blk_pwrite(blk, VHDX_REGION_TABLE2_OFFSET, VHDX_HEADER_BLOCK_SIZE,
buffer, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to write second region table");
goto exit;
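The region-table writes at the end of this hunk land the same buffer at two fixed offsets: VHDX keeps duplicate copies of critical metadata so a torn write always leaves one intact copy to recover from. The double write in isolation (the 192 KiB and 256 KiB offsets are an assumption for illustration, mirroring VHDX_REGION_TABLE_OFFSET and VHDX_REGION_TABLE2_OFFSET):

    #include <errno.h>
    #include <unistd.h>

    #define TABLE1_OFF (192 * 1024)   /* assumed offsets, per the VHDX layout */
    #define TABLE2_OFF (256 * 1024)

    static int write_region_tables(int fd, const void *buf, size_t len)
    {
        if (pwrite(fd, buf, len, TABLE1_OFF) != (ssize_t)len) {
            return -errno;
        }
        if (pwrite(fd, buf, len, TABLE2_OFF) != (ssize_t)len) {
            return -errno;
        }
        return 0;
    }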


@@ -413,9 +413,8 @@ bool vhdx_checksum_is_valid(uint8_t *buf, size_t size, int crc_offset);
int vhdx_parse_log(BlockDriverState *bs, BDRVVHDXState *s, bool *flushed,
Error **errp);
int coroutine_fn GRAPH_RDLOCK
vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset);
int vhdx_log_write_and_flush(BlockDriverState *bs, BDRVVHDXState *s,
void *data, uint32_t length, uint64_t offset);
static inline void leguid_to_cpus(MSGUID *guid)
{


@@ -339,8 +339,7 @@ out:
return ret;
}
static int coroutine_fn GRAPH_RDLOCK
vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
static int vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
{
char *desc, *tmp_desc;
char *p_name, *tmp_str;
@@ -349,7 +348,7 @@ vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
desc = g_malloc0(DESC_SIZE);
tmp_desc = g_malloc0(DESC_SIZE);
ret = bdrv_co_pread(bs->file, s->desc_offset, DESC_SIZE, desc, 0);
ret = bdrv_pread(bs->file, s->desc_offset, DESC_SIZE, desc, 0);
if (ret < 0) {
goto out;
}
@@ -369,7 +368,7 @@ vmdk_write_cid(BlockDriverState *bs, uint32_t cid)
pstrcat(desc, DESC_SIZE, tmp_desc);
}
ret = bdrv_co_pwrite_sync(bs->file, s->desc_offset, DESC_SIZE, desc, 0);
ret = bdrv_pwrite_sync(bs->file, s->desc_offset, DESC_SIZE, desc, 0);
out:
g_free(desc);
@@ -1438,7 +1437,7 @@ get_whole_cluster(BlockDriverState *bs, VmdkExtent *extent,
if (skip_start_bytes > 0) {
if (copy_from_backing) {
/* qcow2 emits this on bs->file instead of bs->backing */
BLKDBG_CO_EVENT(extent->file, BLKDBG_COW_READ);
BLKDBG_EVENT(extent->file, BLKDBG_COW_READ);
ret = bdrv_co_pread(bs->backing, offset, skip_start_bytes,
whole_grain, 0);
if (ret < 0) {
@@ -1446,7 +1445,7 @@ get_whole_cluster(BlockDriverState *bs, VmdkExtent *extent,
goto exit;
}
}
BLKDBG_CO_EVENT(extent->file, BLKDBG_COW_WRITE);
BLKDBG_EVENT(extent->file, BLKDBG_COW_WRITE);
ret = bdrv_co_pwrite(extent->file, cluster_offset, skip_start_bytes,
whole_grain, 0);
if (ret < 0) {
@@ -1458,7 +1457,7 @@ get_whole_cluster(BlockDriverState *bs, VmdkExtent *extent,
if (skip_end_bytes < cluster_bytes) {
if (copy_from_backing) {
/* qcow2 emits this on bs->file instead of bs->backing */
BLKDBG_CO_EVENT(extent->file, BLKDBG_COW_READ);
BLKDBG_EVENT(extent->file, BLKDBG_COW_READ);
ret = bdrv_co_pread(bs->backing, offset + skip_end_bytes,
cluster_bytes - skip_end_bytes,
whole_grain + skip_end_bytes, 0);
@@ -1467,7 +1466,7 @@ get_whole_cluster(BlockDriverState *bs, VmdkExtent *extent,
goto exit;
}
}
BLKDBG_CO_EVENT(extent->file, BLKDBG_COW_WRITE);
BLKDBG_EVENT(extent->file, BLKDBG_COW_WRITE);
ret = bdrv_co_pwrite(extent->file, cluster_offset + skip_end_bytes,
cluster_bytes - skip_end_bytes,
whole_grain + skip_end_bytes, 0);
@@ -1488,7 +1487,7 @@ vmdk_L2update(VmdkExtent *extent, VmdkMetaData *m_data, uint32_t offset)
{
offset = cpu_to_le32(offset);
/* update L2 table */
BLKDBG_CO_EVENT(extent->file, BLKDBG_L2_UPDATE);
BLKDBG_EVENT(extent->file, BLKDBG_L2_UPDATE);
if (bdrv_co_pwrite(extent->file,
((int64_t)m_data->l2_offset * 512)
+ (m_data->l2_index * sizeof(offset)),
@@ -1618,7 +1617,7 @@ get_cluster_offset(BlockDriverState *bs, VmdkExtent *extent,
}
}
l2_table = (char *)extent->l2_cache + (min_index * l2_size_bytes);
BLKDBG_CO_EVENT(extent->file, BLKDBG_L2_LOAD);
BLKDBG_EVENT(extent->file, BLKDBG_L2_LOAD);
if (bdrv_co_pread(extent->file,
(int64_t)l2_offset * 512,
l2_size_bytes,
@@ -1829,12 +1828,12 @@ vmdk_write_extent(VmdkExtent *extent, int64_t cluster_offset,
n_bytes = buf_len + sizeof(VmdkGrainMarker);
qemu_iovec_init_buf(&local_qiov, data, n_bytes);
BLKDBG_CO_EVENT(extent->file, BLKDBG_WRITE_COMPRESSED);
BLKDBG_EVENT(extent->file, BLKDBG_WRITE_COMPRESSED);
} else {
qemu_iovec_init(&local_qiov, qiov->niov);
qemu_iovec_concat(&local_qiov, qiov, qiov_offset, n_bytes);
BLKDBG_CO_EVENT(extent->file, BLKDBG_WRITE_AIO);
BLKDBG_EVENT(extent->file, BLKDBG_WRITE_AIO);
}
write_offset = cluster_offset + offset_in_cluster;
@@ -1876,7 +1875,7 @@ vmdk_read_extent(VmdkExtent *extent, int64_t cluster_offset,
if (!extent->compressed) {
BLKDBG_CO_EVENT(extent->file, BLKDBG_READ_AIO);
BLKDBG_EVENT(extent->file, BLKDBG_READ_AIO);
ret = bdrv_co_preadv(extent->file,
cluster_offset + offset_in_cluster, bytes,
qiov, 0);
@@ -1890,7 +1889,7 @@ vmdk_read_extent(VmdkExtent *extent, int64_t cluster_offset,
buf_bytes = cluster_bytes * 2;
cluster_buf = g_malloc(buf_bytes);
uncomp_buf = g_malloc(cluster_bytes);
BLKDBG_CO_EVENT(extent->file, BLKDBG_READ_COMPRESSED);
BLKDBG_EVENT(extent->file, BLKDBG_READ_COMPRESSED);
ret = bdrv_co_pread(extent->file, cluster_offset, buf_bytes, cluster_buf,
0);
if (ret < 0) {
@@ -1968,7 +1967,7 @@ vmdk_co_preadv(BlockDriverState *bs, int64_t offset, int64_t bytes,
qemu_iovec_concat(&local_qiov, qiov, bytes_done, n_bytes);
/* qcow2 emits this on bs->file instead of bs->backing */
BLKDBG_CO_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
BLKDBG_EVENT(bs->file, BLKDBG_READ_BACKING_AIO);
ret = bdrv_co_preadv(bs->backing, offset, n_bytes,
&local_qiov, 0);
if (ret < 0) {
@@ -2132,7 +2131,7 @@ vmdk_co_pwritev_compressed(BlockDriverState *bs, int64_t offset, int64_t bytes,
int64_t length;
for (i = 0; i < s->num_extents; i++) {
length = bdrv_co_getlength(s->extents[i].file->bs);
length = bdrv_getlength(s->extents[i].file->bs);
if (length < 0) {
return length;
}
@@ -2166,7 +2165,7 @@ vmdk_co_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
return ret;
}
static int coroutine_fn GRAPH_UNLOCKED
static int GRAPH_UNLOCKED
vmdk_init_extent(BlockBackend *blk, int64_t filesize, bool flat, bool compress,
bool zeroed_grain, Error **errp)
{
@@ -2177,7 +2176,7 @@ vmdk_init_extent(BlockBackend *blk, int64_t filesize, bool flat, bool compress,
int gd_buf_size;
if (flat) {
ret = blk_co_truncate(blk, filesize, false, PREALLOC_MODE_OFF, 0, errp);
ret = blk_truncate(blk, filesize, false, PREALLOC_MODE_OFF, 0, errp);
goto exit;
}
magic = cpu_to_be32(VMDK4_MAGIC);
@@ -2229,19 +2228,19 @@ vmdk_init_extent(BlockBackend *blk, int64_t filesize, bool flat, bool compress,
header.check_bytes[3] = 0xa;
/* write all the data */
ret = blk_co_pwrite(blk, 0, sizeof(magic), &magic, 0);
ret = blk_pwrite(blk, 0, sizeof(magic), &magic, 0);
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
goto exit;
}
ret = blk_co_pwrite(blk, sizeof(magic), sizeof(header), &header, 0);
ret = blk_pwrite(blk, sizeof(magic), sizeof(header), &header, 0);
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
goto exit;
}
ret = blk_co_truncate(blk, le64_to_cpu(header.grain_offset) << 9, false,
PREALLOC_MODE_OFF, 0, errp);
ret = blk_truncate(blk, le64_to_cpu(header.grain_offset) << 9, false,
PREALLOC_MODE_OFF, 0, errp);
if (ret < 0) {
goto exit;
}
@@ -2253,8 +2252,8 @@ vmdk_init_extent(BlockBackend *blk, int64_t filesize, bool flat, bool compress,
i < gt_count; i++, tmp += gt_size) {
gd_buf[i] = cpu_to_le32(tmp);
}
ret = blk_co_pwrite(blk, le64_to_cpu(header.rgd_offset) * BDRV_SECTOR_SIZE,
gd_buf_size, gd_buf, 0);
ret = blk_pwrite(blk, le64_to_cpu(header.rgd_offset) * BDRV_SECTOR_SIZE,
gd_buf_size, gd_buf, 0);
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
goto exit;
@@ -2265,8 +2264,8 @@ vmdk_init_extent(BlockBackend *blk, int64_t filesize, bool flat, bool compress,
i < gt_count; i++, tmp += gt_size) {
gd_buf[i] = cpu_to_le32(tmp);
}
ret = blk_co_pwrite(blk, le64_to_cpu(header.gd_offset) * BDRV_SECTOR_SIZE,
gd_buf_size, gd_buf, 0);
ret = blk_pwrite(blk, le64_to_cpu(header.gd_offset) * BDRV_SECTOR_SIZE,
gd_buf_size, gd_buf, 0);
if (ret < 0) {
error_setg(errp, QERR_IO_ERROR);
}
@@ -2909,7 +2908,7 @@ vmdk_co_check(BlockDriverState *bs, BdrvCheckResult *result, BdrvCheckMode fix)
BDRVVmdkState *s = bs->opaque;
VmdkExtent *extent = NULL;
int64_t sector_num = 0;
int64_t total_sectors = bdrv_co_nb_sectors(bs);
int64_t total_sectors = bdrv_nb_sectors(bs);
int ret;
uint64_t cluster_offset;
@@ -2939,7 +2938,7 @@ vmdk_co_check(BlockDriverState *bs, BdrvCheckResult *result, BdrvCheckMode fix)
break;
}
if (ret == VMDK_OK) {
int64_t extent_len = bdrv_co_getlength(extent->file->bs);
int64_t extent_len = bdrv_getlength(extent->file->bs);
if (extent_len < 0) {
fprintf(stderr,
"ERROR: could not get extent file length for sector %"


@@ -486,8 +486,8 @@ static int vpc_reopen_prepare(BDRVReopenState *state,
* operation (the block bitmap is updated then), 0 otherwise.
* If write is true then err must not be NULL.
*/
static int64_t coroutine_fn GRAPH_RDLOCK
get_image_offset(BlockDriverState *bs, uint64_t offset, bool write, int *err)
static inline int64_t get_image_offset(BlockDriverState *bs, uint64_t offset,
bool write, int *err)
{
BDRVVPCState *s = bs->opaque;
uint64_t bitmap_offset, block_offset;
@@ -515,7 +515,8 @@ get_image_offset(BlockDriverState *bs, uint64_t offset, bool write, int *err)
s->last_bitmap_offset = bitmap_offset;
memset(bitmap, 0xff, s->bitmap_size);
r = bdrv_co_pwrite_sync(bs->file, bitmap_offset, s->bitmap_size, bitmap, 0);
r = bdrv_pwrite_sync(bs->file, bitmap_offset, s->bitmap_size, bitmap,
0);
if (r < 0) {
*err = r;
return -2;
@@ -531,13 +532,13 @@ get_image_offset(BlockDriverState *bs, uint64_t offset, bool write, int *err)
*
* Returns 0 on success and < 0 on error
*/
static int coroutine_fn GRAPH_RDLOCK rewrite_footer(BlockDriverState *bs)
static int rewrite_footer(BlockDriverState *bs)
{
int ret;
BDRVVPCState *s = bs->opaque;
int64_t offset = s->free_data_block_offset;
ret = bdrv_co_pwrite_sync(bs->file, offset, sizeof(s->footer), &s->footer, 0);
ret = bdrv_pwrite_sync(bs->file, offset, sizeof(s->footer), &s->footer, 0);
if (ret < 0)
return ret;
@@ -551,8 +552,7 @@ static int coroutine_fn GRAPH_RDLOCK rewrite_footer(BlockDriverState *bs)
*
* Returns the sectors' offset in the image file on success and < 0 on error
*/
static int64_t coroutine_fn GRAPH_RDLOCK
alloc_block(BlockDriverState *bs, int64_t offset)
static int64_t alloc_block(BlockDriverState *bs, int64_t offset)
{
BDRVVPCState *s = bs->opaque;
int64_t bat_offset;
@@ -572,8 +572,8 @@ alloc_block(BlockDriverState *bs, int64_t offset)
/* Initialize the block's bitmap */
memset(bitmap, 0xff, s->bitmap_size);
ret = bdrv_co_pwrite_sync(bs->file, s->free_data_block_offset,
s->bitmap_size, bitmap, 0);
ret = bdrv_pwrite_sync(bs->file, s->free_data_block_offset,
s->bitmap_size, bitmap, 0);
if (ret < 0) {
return ret;
}
@@ -587,7 +587,7 @@ alloc_block(BlockDriverState *bs, int64_t offset)
/* Write BAT entry to disk */
bat_offset = s->bat_offset + (4 * index);
bat_value = cpu_to_be32(s->pagetable[index]);
ret = bdrv_co_pwrite_sync(bs->file, bat_offset, 4, &bat_value, 0);
ret = bdrv_pwrite_sync(bs->file, bat_offset, 4, &bat_value, 0);
if (ret < 0)
goto fail;
@@ -718,11 +718,11 @@ fail:
return ret;
}
static int coroutine_fn GRAPH_RDLOCK
vpc_co_block_status(BlockDriverState *bs, bool want_zero,
int64_t offset, int64_t bytes,
int64_t *pnum, int64_t *map,
BlockDriverState **file)
static int coroutine_fn vpc_co_block_status(BlockDriverState *bs,
bool want_zero,
int64_t offset, int64_t bytes,
int64_t *pnum, int64_t *map,
BlockDriverState **file)
{
BDRVVPCState *s = bs->opaque;
int64_t image_offset;
@@ -820,8 +820,8 @@ static int calculate_geometry(int64_t total_sectors, uint16_t *cyls,
return 0;
}
static int coroutine_fn create_dynamic_disk(BlockBackend *blk, VHDFooter *footer,
int64_t total_sectors)
static int create_dynamic_disk(BlockBackend *blk, VHDFooter *footer,
int64_t total_sectors)
{
VHDDynDiskHeader dyndisk_header;
uint8_t bat_sector[512];
@@ -834,13 +834,13 @@ static int coroutine_fn create_dynamic_disk(BlockBackend *blk, VHDFooter *footer
block_size = 0x200000;
num_bat_entries = DIV_ROUND_UP(total_sectors, block_size / 512);
ret = blk_co_pwrite(blk, offset, sizeof(*footer), footer, 0);
ret = blk_pwrite(blk, offset, sizeof(*footer), footer, 0);
if (ret < 0) {
goto fail;
}
offset = 1536 + ((num_bat_entries * 4 + 511) & ~511);
ret = blk_co_pwrite(blk, offset, sizeof(*footer), footer, 0);
ret = blk_pwrite(blk, offset, sizeof(*footer), footer, 0);
if (ret < 0) {
goto fail;
}
@@ -850,7 +850,7 @@ static int coroutine_fn create_dynamic_disk(BlockBackend *blk, VHDFooter *footer
memset(bat_sector, 0xFF, 512);
for (i = 0; i < DIV_ROUND_UP(num_bat_entries * 4, 512); i++) {
ret = blk_co_pwrite(blk, offset, 512, bat_sector, 0);
ret = blk_pwrite(blk, offset, 512, bat_sector, 0);
if (ret < 0) {
goto fail;
}
@@ -878,7 +878,7 @@ static int coroutine_fn create_dynamic_disk(BlockBackend *blk, VHDFooter *footer
/* Write the header */
offset = 512;
ret = blk_co_pwrite(blk, offset, sizeof(dyndisk_header), &dyndisk_header, 0);
ret = blk_pwrite(blk, offset, sizeof(dyndisk_header), &dyndisk_header, 0);
if (ret < 0) {
goto fail;
}
@@ -888,21 +888,21 @@ static int coroutine_fn create_dynamic_disk(BlockBackend *blk, VHDFooter *footer
return ret;
}
static int coroutine_fn create_fixed_disk(BlockBackend *blk, VHDFooter *footer,
int64_t total_size, Error **errp)
static int create_fixed_disk(BlockBackend *blk, VHDFooter *footer,
int64_t total_size, Error **errp)
{
int ret;
/* Add footer to total size */
total_size += sizeof(*footer);
ret = blk_co_truncate(blk, total_size, false, PREALLOC_MODE_OFF, 0, errp);
ret = blk_truncate(blk, total_size, false, PREALLOC_MODE_OFF, 0, errp);
if (ret < 0) {
return ret;
}
ret = blk_co_pwrite(blk, total_size - sizeof(*footer), sizeof(*footer),
footer, 0);
ret = blk_pwrite(blk, total_size - sizeof(*footer), sizeof(*footer),
footer, 0);
if (ret < 0) {
error_setg_errno(errp, -ret, "Unable to write VHD header");
return ret;
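create_fixed_disk() shows the whole layout of a fixed VHD: the raw data followed by a footer at the very end of the file, so creation is just a resize plus one trailing write. A sketch assuming the conventional 512-byte footer:

    #include <errno.h>
    #include <unistd.h>

    #define FOOTER_SZ 512             /* assumed footer size */

    static int create_fixed(int fd, off_t data_size, const void *footer)
    {
        off_t total = data_size + FOOTER_SZ;

        if (ftruncate(fd, total) < 0) {
            return -errno;
        }
        if (pwrite(fd, footer, FOOTER_SZ, total - FOOTER_SZ) != FOOTER_SZ) {
            return -errno;
        }
        return 0;
    }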


@@ -230,27 +230,20 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
uint64_t perm, uint64_t shared_perm, Error **errp)
{
BdrvChild *c;
AioContext *ctx = bdrv_get_aio_context(bs);
bool need_context_ops;
GLOBAL_STATE_CODE();
bdrv_ref(bs);
need_context_ops = ctx != job->job.aio_context;
need_context_ops = bdrv_get_aio_context(bs) != job->job.aio_context;
if (need_context_ops) {
if (job->job.aio_context != qemu_get_aio_context()) {
aio_context_release(job->job.aio_context);
}
aio_context_acquire(ctx);
if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
aio_context_release(job->job.aio_context);
}
c = bdrv_root_attach_child(bs, name, &child_job, 0, perm, shared_perm, job,
errp);
if (need_context_ops) {
aio_context_release(ctx);
if (job->job.aio_context != qemu_get_aio_context()) {
aio_context_acquire(job->job.aio_context);
}
if (need_context_ops && job->job.aio_context != qemu_get_aio_context()) {
aio_context_acquire(job->job.aio_context);
}
if (c == NULL) {
return -EPERM;
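The block_job_add_bdrv() hunk is a lock-ordering dance: release our own context's lock (unless it is the main one), take the target context's lock, perform the operation, then reverse the steps. The shape of it in plain pthreads, standing in for AioContext acquire/release rather than reproducing QEMU's actual API:

    #include <pthread.h>

    static void run_under_other_lock(pthread_mutex_t *ours, pthread_mutex_t *theirs,
                                     int same_ctx, void (*op)(void *), void *arg)
    {
        if (!same_ctx) {
            pthread_mutex_unlock(ours);   /* avoid lock-order inversion */
            pthread_mutex_lock(theirs);
        }
        op(arg);
        if (!same_ctx) {
            pthread_mutex_unlock(theirs);
            pthread_mutex_lock(ours);     /* restore our own lock last */
        }
    }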


@@ -473,6 +473,10 @@ int main(int argc, char **argv)
target_environ = envlist_to_environ(envlist, NULL);
envlist_free(envlist);
if (reserved_va) {
mmap_next_start = reserved_va + 1;
}
{
Error *err = NULL;
if (seed_optarg != NULL) {
@@ -490,49 +494,7 @@ int main(int argc, char **argv)
* Now that page sizes are configured we can do
* proper page alignment for guest_base.
*/
if (have_guest_base) {
if (guest_base & ~qemu_host_page_mask) {
error_report("Selected guest base not host page aligned");
exit(1);
}
}
/*
* If reserving host virtual address space, do so now.
* Combined with '-B', ensure that the chosen range is free.
*/
if (reserved_va) {
void *p;
if (have_guest_base) {
p = mmap((void *)guest_base, reserved_va + 1, PROT_NONE,
MAP_ANON | MAP_PRIVATE | MAP_FIXED | MAP_EXCL, -1, 0);
} else {
p = mmap(NULL, reserved_va + 1, PROT_NONE,
MAP_ANON | MAP_PRIVATE, -1, 0);
}
if (p == MAP_FAILED) {
const char *err = strerror(errno);
char *sz = size_to_str(reserved_va + 1);
if (have_guest_base) {
error_report("Cannot allocate %s bytes at -B %p for guest "
"address space: %s", sz, (void *)guest_base, err);
} else {
error_report("Cannot allocate %s bytes for guest "
"address space: %s", sz, err);
}
exit(1);
}
guest_base = (uintptr_t)p;
have_guest_base = true;
/* Ensure that mmap_next_start is within range. */
if (reserved_va <= mmap_next_start) {
mmap_next_start = (reserved_va / 4 * 3)
& TARGET_PAGE_MASK & qemu_host_page_mask;
}
}
guest_base = HOST_PAGE_ALIGN(guest_base);
if (loader_exec(filename, argv + optind, target_environ, regs, info,
&bprm) != 0) {
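The block removed from main() reserves the guest's whole virtual address range up front with a PROT_NONE mapping, so no later host allocation can land inside it; guest mappings are then placed into the reservation. A standalone demonstration of the reservation step (the 4 GiB size is arbitrary):

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t reserved = (size_t)4 << 30;      /* e.g. a 4 GiB guest space */
        void *base = mmap(NULL, reserved, PROT_NONE,
                          MAP_ANON | MAP_PRIVATE, -1, 0);

        if (base == MAP_FAILED) {
            perror("reserve guest address space");
            return 1;
        }
        printf("guest_base = %p\n", base);
        munmap(base, reserved);
        return 0;
    }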


@@ -32,7 +32,6 @@ void mmap_lock(void)
void mmap_unlock(void)
{
assert(mmap_lock_count > 0);
if (--mmap_lock_count == 0) {
pthread_mutex_unlock(&mmap_mutex);
}
@@ -214,6 +213,8 @@ static int mmap_frag(abi_ulong real_start,
#endif
abi_ulong mmap_next_start = TASK_UNMAPPED_BASE;
unsigned long last_brk;
/*
* Subroutine of mmap_find_vma, used when we have pre-allocated a chunk of guest
* address space.
@@ -221,16 +222,50 @@ abi_ulong mmap_next_start = TASK_UNMAPPED_BASE;
static abi_ulong mmap_find_vma_reserved(abi_ulong start, abi_ulong size,
abi_ulong alignment)
{
abi_ulong ret;
abi_ulong addr;
abi_ulong end_addr;
int prot;
int looped = 0;
ret = page_find_range_empty(start, reserved_va, size, alignment);
if (ret == -1 && start > TARGET_PAGE_SIZE) {
/* Restart at the beginning of the address space. */
ret = page_find_range_empty(TARGET_PAGE_SIZE, start - 1,
size, alignment);
if (size > reserved_va) {
return (abi_ulong)-1;
}
return ret;
size = HOST_PAGE_ALIGN(size) + alignment;
end_addr = start + size;
if (end_addr > reserved_va) {
end_addr = reserved_va + 1;
}
addr = end_addr - qemu_host_page_size;
while (1) {
if (addr > end_addr) {
if (looped) {
return (abi_ulong)-1;
}
end_addr = reserved_va + 1;
addr = end_addr - qemu_host_page_size;
looped = 1;
continue;
}
prot = page_get_flags(addr);
if (prot) {
end_addr = addr;
}
if (end_addr - addr >= size) {
break;
}
addr -= qemu_host_page_size;
}
if (start == mmap_next_start) {
mmap_next_start = addr;
}
/* addr is sufficiently low to align it up */
if (alignment != 0) {
addr = (addr + alignment) & ~(alignment - 1);
}
return addr;
}
/*
@@ -258,8 +293,7 @@ static abi_ulong mmap_find_vma_aligned(abi_ulong start, abi_ulong size,
if (reserved_va) {
return mmap_find_vma_reserved(start, size,
(alignment != 0 ? 1 << alignment :
MAX(qemu_host_page_size, TARGET_PAGE_SIZE)));
(alignment != 0 ? 1 << alignment : 0));
}
addr = start;
@@ -575,7 +609,7 @@ abi_long target_mmap(abi_ulong start, abi_ulong len, int prot,
}
/* Reject the mapping if any page within the range is mapped */
if ((flags & MAP_EXCL) && !page_check_range_empty(start, end - 1)) {
if ((flags & MAP_EXCL) && page_check_range(start, len, 0) < 0) {
errno = EINVAL;
goto fail;
}


@@ -232,6 +232,7 @@ abi_long target_mremap(abi_ulong old_addr, abi_ulong old_size,
abi_ulong new_size, unsigned long flags,
abi_ulong new_addr);
int target_msync(abi_ulong start, abi_ulong len, int flags);
extern unsigned long last_brk;
extern abi_ulong mmap_next_start;
abi_ulong mmap_find_vma(abi_ulong start, abi_ulong size);
void TSA_NO_TSA mmap_fork_start(void);
@@ -266,7 +267,7 @@ abi_long do_freebsd_sysarch(void *cpu_env, abi_long arg1, abi_long arg2);
static inline bool access_ok(int type, abi_ulong addr, abi_ulong size)
{
return page_check_range((target_ulong)addr, size, type);
return page_check_range((target_ulong)addr, size, type) == 0;
}
/*


@@ -227,9 +227,7 @@ type safe_##name(type1 arg1, type2 arg2, type3 arg3, type4 arg4, \
}
/* So far all target and host bitmasks are the same */
#undef target_to_host_bitmask
#define target_to_host_bitmask(x, tbl) (x)
#undef host_to_target_bitmask
#define host_to_target_bitmask(x, tbl) (x)
#endif /* SYSCALL_DEFS_H */


@@ -106,27 +106,11 @@ static void pty_chr_update_read_handler(Chardev *chr)
static int char_pty_chr_write(Chardev *chr, const uint8_t *buf, int len)
{
PtyChardev *s = PTY_CHARDEV(chr);
GPollFD pfd;
int rc;
if (s->connected) {
return io_channel_send(s->ioc, buf, len);
if (!s->connected) {
return len;
}
/*
* The other side might already be re-connected, but the timer might
* not have fired yet. So let's check here whether we can write again:
*/
pfd.fd = QIO_CHANNEL_FILE(s->ioc)->fd;
pfd.events = G_IO_OUT;
pfd.revents = 0;
rc = RETRY_ON_EINTR(g_poll(&pfd, 1, 0));
g_assert(rc >= 0);
if (!(pfd.revents & G_IO_HUP) && (pfd.revents & G_IO_OUT)) {
io_channel_send(s->ioc, buf, len);
}
return len;
return io_channel_send(s->ioc, buf, len);
}
static GSource *pty_chr_add_watch(Chardev *chr, GIOCondition cond)
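The removed char_pty_chr_write() block probes the pty for writability with a zero-timeout g_poll() before sending, so data is silently dropped while the far side is gone rather than blocking the caller. The same probe with plain poll(2), of which g_poll() is a thin wrapper:

    #include <errno.h>
    #include <poll.h>

    static int can_write_now(int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLOUT };
        int rc;

        do {
            rc = poll(&pfd, 1, 0);    /* timeout 0: probe, don't wait */
        } while (rc < 0 && errno == EINTR);
        return rc > 0 && !(pfd.revents & POLLHUP) && (pfd.revents & POLLOUT);
    }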


@@ -742,12 +742,8 @@ static void tcp_chr_websock_handshake(QIOTask *task, gpointer user_data)
{
Chardev *chr = user_data;
SocketChardev *s = user_data;
Error *err = NULL;
if (qio_task_propagate_error(task, &err)) {
error_reportf_err(err,
"websock handshake of character device %s failed: ",
chr->label);
if (qio_task_propagate_error(task, NULL)) {
tcp_chr_disconnect(chr);
} else {
if (s->do_telnetopt) {
@@ -782,12 +778,8 @@ static void tcp_chr_tls_handshake(QIOTask *task,
{
Chardev *chr = user_data;
SocketChardev *s = user_data;
Error *err = NULL;
if (qio_task_propagate_error(task, &err)) {
error_reportf_err(err,
"TLS handshake of character device %s failed: ",
chr->label);
if (qio_task_propagate_error(task, NULL)) {
tcp_chr_disconnect(chr);
} else {
if (s->is_websock) {


@@ -190,7 +190,7 @@ static void qemu_chr_open_stdio(Chardev *chr,
}
}
dwMode |= ENABLE_LINE_INPUT | ENABLE_VIRTUAL_TERMINAL_INPUT;
dwMode |= ENABLE_LINE_INPUT;
if (is_console) {
/* set the terminal in raw mode */


@@ -7,7 +7,6 @@
#CONFIG_VFIO_CCW=n
#CONFIG_VIRTIO_PCI=n
#CONFIG_WDT_DIAG288=n
#CONFIG_PCIE_DEVICES=n
# Boards:
#

configure (vendored)

@@ -451,11 +451,7 @@ elif check_define __s390__ ; then
cpu="s390"
fi
elif check_define __riscv ; then
if check_define _LP64 ; then
cpu="riscv64"
else
cpu="riscv32"
fi
cpu="riscv"
elif check_define __arm__ ; then
cpu="arm"
elif check_define __aarch64__ ; then
@@ -469,118 +465,49 @@ else
echo "WARNING: unrecognized host CPU, proceeding with 'uname -m' output '$cpu'"
fi
# Normalise host CPU name to the values used by Meson cross files and in source
# directories, and set multilib cflags. The canonicalization isn't really
# necessary, because the architectures that we check for should not hit the
# 'uname -m' case, but better safe than sorry in case --cpu= is used.
#
# Normalise host CPU name and set multilib cflags. The canonicalization
# isn't really necessary, because the architectures that we check for
# should not hit the 'uname -m' case, but better safe than sorry.
# Note that this case should only have supported host CPUs, not guests.
# Please keep it sorted and synchronized with meson.build's host_arch.
host_arch=
linux_arch=
case "$cpu" in
aarch64)
host_arch=aarch64
linux_arch=arm64
;;
armv*b|armv*l|arm)
cpu=arm
host_arch=arm
linux_arch=arm
;;
cpu="arm" ;;
i386|i486|i586|i686)
cpu="i386"
host_arch=i386
linux_arch=x86
CPU_CFLAGS="-m32"
;;
loongarch*)
cpu=loongarch64
host_arch=loongarch64
;;
mips64*)
cpu=mips64
host_arch=mips
linux_arch=mips
;;
mips*)
cpu=mips
host_arch=mips
linux_arch=mips
;;
ppc)
host_arch=ppc
linux_arch=powerpc
CPU_CFLAGS="-m32"
;;
ppc64)
host_arch=ppc64
linux_arch=powerpc
CPU_CFLAGS="-m64 -mbig-endian"
;;
ppc64le)
cpu=ppc64
host_arch=ppc64
linux_arch=powerpc
CPU_CFLAGS="-m64 -mlittle-endian"
;;
riscv32 | riscv64)
host_arch=riscv
linux_arch=riscv
;;
s390)
linux_arch=s390
CPU_CFLAGS="-m31"
;;
s390x)
host_arch=s390x
linux_arch=s390
CPU_CFLAGS="-m64"
;;
sparc|sun4[cdmuv])
cpu=sparc
CPU_CFLAGS="-m32 -mv8plus -mcpu=ultrasparc"
;;
sparc64)
host_arch=sparc64
CPU_CFLAGS="-m64 -mcpu=ultrasparc"
;;
CPU_CFLAGS="-m32" ;;
x32)
cpu="x86_64"
host_arch=x86_64
linux_arch=x86
CPU_CFLAGS="-mx32"
;;
CPU_CFLAGS="-mx32" ;;
x86_64|amd64)
cpu="x86_64"
host_arch=x86_64
linux_arch=x86
# ??? Only extremely old AMD cpus do not have cmpxchg16b.
# If we truly care, we should simply detect this case at
# runtime and generate the fallback to serial emulation.
CPU_CFLAGS="-m64 -mcx16"
;;
esac
CPU_CFLAGS="-m64 -mcx16" ;;
if test -n "$host_arch" && {
! test -d "$source_path/linux-user/include/host/$host_arch" ||
! test -d "$source_path/common-user/host/$host_arch"; }; then
error_exit "linux-user/include/host/$host_arch does not exist." \
"This is a bug in the configure script, please report it."
fi
if test -n "$linux_arch" && ! test -d "$source_path/linux-headers/asm-$linux_arch"; then
error_exit "linux-headers/asm-$linux_arch does not exist." \
"This is a bug in the configure script, please report it."
fi
mips*)
cpu="mips" ;;
ppc)
CPU_CFLAGS="-m32" ;;
ppc64)
CPU_CFLAGS="-m64 -mbig-endian" ;;
ppc64le)
cpu="ppc64"
CPU_CFLAGS="-m64 -mlittle-endian" ;;
s390)
CPU_CFLAGS="-m31" ;;
s390x)
CPU_CFLAGS="-m64" ;;
sparc|sun4[cdmuv])
cpu="sparc"
CPU_CFLAGS="-m32 -mv8plus -mcpu=ultrasparc" ;;
sparc64)
CPU_CFLAGS="-m64 -mcpu=ultrasparc" ;;
esac
check_py_version() {
# We require python >= 3.7.
@@ -826,9 +753,6 @@ for opt do
# everything else has the same name in configure and meson
--*) meson_option_parse "$opt" "$optarg"
;;
# Pass through -Dxxxx options to meson
-D*) meson_options="$meson_options $opt"
;;
esac
done
@@ -872,7 +796,7 @@ default_target_list=""
mak_wilds=""
if [ "$linux_user" != no ]; then
if [ "$targetos" = linux ] && [ -n "$host_arch" ]; then
if [ "$targetos" = linux ] && [ -d "$source_path/linux-user/include/host/$cpu" ]; then
linux_user=yes
elif [ "$linux_user" = yes ]; then
error_exit "linux-user not supported on this architecture"
@@ -918,7 +842,6 @@ $(echo Available targets: $default_target_list | \
--target-list-exclude=LIST exclude a set of targets from the default target-list
Advanced options (experts only):
-Dmesonoptname=val passthrough option to meson unmodified
--cross-prefix=PREFIX use PREFIX for compile tools, PREFIX can be blank [$cross_prefix]
--cc=CC use C compiler CC [$cc]
--host-cc=CC use C compiler CC [$host_cc] for code run at
@@ -1777,9 +1700,37 @@ echo "PKG_CONFIG=${pkg_config}" >> $config_host_mak
echo "CC=$cc" >> $config_host_mak
echo "EXESUF=$EXESUF" >> $config_host_mak
# use included Linux headers for KVM architectures
if test "$linux" = "yes" && test -n "$linux_arch"; then
symlink "$source_path/linux-headers/asm-$linux_arch" linux-headers/asm
# use included Linux headers
if test "$linux" = "yes" ; then
mkdir -p linux-headers
case "$cpu" in
i386|x86_64)
linux_arch=x86
;;
ppc|ppc64)
linux_arch=powerpc
;;
s390x)
linux_arch=s390
;;
aarch64)
linux_arch=arm64
;;
loongarch*)
linux_arch=loongarch
;;
mips64)
linux_arch=mips
;;
*)
# For most CPUs the kernel architecture name and QEMU CPU name match.
linux_arch="$cpu"
;;
esac
# For non-KVM architectures we will not have asm headers
if [ -e "$source_path/linux-headers/asm-$linux_arch" ]; then
symlink "$source_path/linux-headers/asm-$linux_arch" linux-headers/asm
fi
fi
for target in $target_list; do
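configure's check_define probes compiler-predefined macros, and the riscv hunk keys the 32/64-bit split off _LP64. The identical test expressed directly in C:

    #include <stdio.h>

    int main(void)
    {
    #if defined(__riscv) && defined(_LP64)
        puts("riscv64");
    #elif defined(__riscv)
        puts("riscv32");
    #else
        puts("not riscv");
    #endif
        return 0;
    }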

View File

@@ -316,11 +316,6 @@ static int fill_context(KDDEBUGGER_DATA64 *kdbg,
return 1;
}
if (!Prcb) {
eprintf("Context for CPU #%d is missing\n", i);
continue;
}
if (va_space_rw(vs, Prcb + kdbg->OffsetPrcbContext,
&Context, sizeof(Context), 0)) {
eprintf("Failed to read CPU #%d ContextFrame location\n", i);


@@ -772,7 +772,7 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
for (i = 0; i < argc; i++) {
char *opt = argv[i];
g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
g_autofree char **tokens = g_strsplit(opt, "=", 2);
if (g_strcmp0(tokens[0], "iblksize") == 0) {
l1_iblksize = STRTOLL(tokens[1]);
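The g_auto(GStrv) change repeated across these plugin hunks is a leak fix, not a style fix: g_autofree on a char ** releases only the array with g_free(), leaking every element, while g_auto(GStrv) runs g_strfreev(), which frees the strings as well. A self-contained illustration:

    #include <glib.h>

    int main(void)
    {
        {
            g_autofree char **leaky = g_strsplit("a=b", "=", 2);
            (void)leaky;              /* array freed; "a" and "b" leak */
        }
        {
            g_auto(GStrv) fine = g_strsplit("a=b", "=", 2);
            (void)fine;               /* g_strfreev() frees array and strings */
        }
        return 0;
    }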


@@ -148,7 +148,7 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
int argc, char **argv)
{
for (int i = 0; i < argc; i++) {
g_auto(GStrv) tokens = g_strsplit(argv[i], "=", 2);
g_autofree char **tokens = g_strsplit(argv[i], "=", 2);
if (g_strcmp0(tokens[0], "filename") == 0) {
file_name = g_strdup(tokens[1]);
}


@@ -227,7 +227,7 @@ QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
for (int i = 0; i < argc; i++) {
char *opt = argv[i];
g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
g_autofree char **tokens = g_strsplit(opt, "=", 2);
if (g_strcmp0(tokens[0], "ifilter") == 0) {
parse_insn_match(tokens[1]);
} else if (g_strcmp0(tokens[0], "afilter") == 0) {


@@ -135,7 +135,7 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
{
for (int i = 0; i < argc; i++) {
char *opt = argv[i];
g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
g_autofree char **tokens = g_strsplit(opt, "=", 2);
if (g_strcmp0(tokens[0], "inline") == 0) {
if (!qemu_plugin_bool_parse(tokens[0], tokens[1], &do_inline)) {
fprintf(stderr, "boolean argument parsing failed: %s\n", opt);


@@ -169,7 +169,7 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
for (i = 0; i < argc; i++) {
char *opt = argv[i];
g_auto(GStrv) tokens = g_strsplit(opt, "=", -1);
g_autofree char **tokens = g_strsplit(opt, "=", -1);
if (g_strcmp0(tokens[0], "sortby") == 0) {
if (g_strcmp0(tokens[1], "reads") == 0) {


@@ -333,7 +333,7 @@ QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
for (i = 0; i < argc; i++) {
char *p = argv[i];
g_auto(GStrv) tokens = g_strsplit(p, "=", -1);
g_autofree char **tokens = g_strsplit(p, "=", -1);
if (g_strcmp0(tokens[0], "inline") == 0) {
if (!qemu_plugin_bool_parse(tokens[0], tokens[1], &do_inline)) {
fprintf(stderr, "boolean argument parsing failed: %s\n", p);


@@ -263,7 +263,7 @@ int qemu_plugin_install(qemu_plugin_id_t id, const qemu_info_t *info,
for (i = 0; i < argc; i++) {
char *opt = argv[i];
g_auto(GStrv) tokens = g_strsplit(opt, "=", 2);
g_autofree char **tokens = g_strsplit(opt, "=", 2);
if (g_strcmp0(tokens[0], "track") == 0) {
if (g_strcmp0(tokens[1], "read") == 0) {


@@ -130,7 +130,7 @@ static void report_divergance(ExecState *us, ExecState *them)
}
}
divergence_log = g_slist_prepend(divergence_log,
g_memdup2(&divrec, sizeof(divrec)));
g_memdup(&divrec, sizeof(divrec)));
/* Output short log entry of going out of sync... */
if (verbose || divrec.distance == 1 || diverged) {
@@ -323,7 +323,7 @@ QEMU_PLUGIN_EXPORT int qemu_plugin_install(qemu_plugin_id_t id,
for (i = 0; i < argc; i++) {
char *p = argv[i];
g_auto(GStrv) tokens = g_strsplit(p, "=", 2);
g_autofree char **tokens = g_strsplit(p, "=", 2);
if (g_strcmp0(tokens[0], "verbose") == 0) {
if (!qemu_plugin_bool_parse(tokens[0], tokens[1], &verbose)) {
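g_memdup() takes its length as guint, so a 64-bit gsize is silently truncated; GLib 2.68 added g_memdup2() with a gsize parameter for exactly this reason, and the hunk above reverts to the old call. A truncation-safe equivalent for code stuck on older GLib:

    #include <glib.h>
    #include <string.h>

    static gpointer memdup_sized(gconstpointer src, gsize len)
    {
        gpointer dst = g_malloc(len); /* gsize length: no 32-bit truncation */
        memcpy(dst, src, len);
        return dst;
    }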


@@ -303,53 +303,6 @@ vg_get_display_info(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd)
cmd->state = VG_CMD_STATE_PENDING;
}
static gboolean
get_edid_cb(gint fd, GIOCondition condition, gpointer user_data)
{
struct virtio_gpu_resp_edid resp_edid;
VuGpu *vg = user_data;
struct virtio_gpu_ctrl_command *cmd = QTAILQ_LAST(&vg->fenceq);
g_debug("get edid cb");
assert(cmd->cmd_hdr.type == VIRTIO_GPU_CMD_GET_EDID);
if (!vg_recv_msg(vg, VHOST_USER_GPU_GET_EDID,
sizeof(resp_edid), &resp_edid)) {
return G_SOURCE_CONTINUE;
}
QTAILQ_REMOVE(&vg->fenceq, cmd, next);
vg_ctrl_response(vg, cmd, &resp_edid.hdr, sizeof(resp_edid));
vg->wait_in = 0;
vg_handle_ctrl(&vg->dev.parent, 0);
return G_SOURCE_REMOVE;
}
void
vg_get_edid(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd)
{
struct virtio_gpu_cmd_get_edid get_edid;
VUGPU_FILL_CMD(get_edid);
virtio_gpu_bswap_32(&get_edid, sizeof(get_edid));
VhostUserGpuMsg msg = {
.request = VHOST_USER_GPU_GET_EDID,
.size = sizeof(VhostUserGpuEdidRequest),
.payload.edid_req = {
.scanout_id = get_edid.scanout,
},
};
assert(vg->wait_in == 0);
vg_send_msg(vg, &msg, -1);
vg->wait_in = g_unix_fd_add(vg->sock_fd, G_IO_IN | G_IO_HUP,
get_edid_cb, vg);
cmd->state = VG_CMD_STATE_PENDING;
}
static void
vg_resource_create_2d(VuGpu *g,
struct virtio_gpu_ctrl_command *cmd)
@@ -884,9 +837,8 @@ vg_process_cmd(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd)
case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
vg_resource_detach_backing(vg, cmd);
break;
case VIRTIO_GPU_CMD_GET_EDID:
vg_get_edid(vg, cmd);
break;
/* case VIRTIO_GPU_CMD_GET_EDID: */
/* break */
default:
g_warning("TODO handle ctrl %x\n", cmd->cmd_hdr.type);
cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
@@ -1070,36 +1022,26 @@ vg_queue_set_started(VuDev *dev, int qidx, bool started)
static gboolean
protocol_features_cb(gint fd, GIOCondition condition, gpointer user_data)
{
const uint64_t protocol_edid = (1 << VHOST_USER_GPU_PROTOCOL_F_EDID);
VuGpu *g = user_data;
uint64_t protocol_features;
uint64_t u64;
VhostUserGpuMsg msg = {
.request = VHOST_USER_GPU_GET_PROTOCOL_FEATURES
};
if (!vg_recv_msg(g, msg.request,
sizeof(protocol_features), &protocol_features)) {
if (!vg_recv_msg(g, msg.request, sizeof(u64), &u64)) {
return G_SOURCE_CONTINUE;
}
protocol_features &= protocol_edid;
msg = (VhostUserGpuMsg) {
.request = VHOST_USER_GPU_SET_PROTOCOL_FEATURES,
.size = sizeof(uint64_t),
.payload.u64 = protocol_features,
.payload.u64 = 0
};
vg_send_msg(g, &msg, -1);
g->wait_in = 0;
vg_handle_ctrl(&g->dev.parent, 0);
if (g->edid_inited && !(protocol_features & protocol_edid)) {
g_printerr("EDID feature set by the frontend but it does not support "
"the EDID vhost-user-gpu protocol.\n");
exit(EXIT_FAILURE);
}
return G_SOURCE_REMOVE;
}
@@ -1107,7 +1049,7 @@ static void
set_gpu_protocol_features(VuGpu *g)
{
VhostUserGpuMsg msg = {
.request = VHOST_USER_GPU_GET_PROTOCOL_FEATURES,
.request = VHOST_USER_GPU_GET_PROTOCOL_FEATURES
};
vg_send_msg(g, &msg, -1);
@@ -1144,7 +1086,6 @@ vg_get_features(VuDev *dev)
if (opt_virgl) {
features |= 1 << VIRTIO_GPU_F_VIRGL;
}
features |= 1 << VIRTIO_GPU_F_EDID;
return features;
}
@@ -1162,8 +1103,6 @@ vg_set_features(VuDev *dev, uint64_t features)
g->virgl_inited = true;
}
g->edid_inited = !!(features & (1 << VIRTIO_GPU_F_EDID));
g->virgl = virgl;
}
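protocol_features_cb() follows the usual vhost-user negotiation shape: read the feature mask the peer offers, AND it with the bits this side implements, and send the result back so an unknown bit is never acknowledged. The masking step in isolation (PROTO_F_EDID is a stand-in for the bit defined in the header below):

    #include <stdint.h>

    #define PROTO_F_EDID (1u << 0)    /* stand-in protocol feature bit */

    static uint64_t negotiate(uint64_t offered)
    {
        uint64_t supported = PROTO_F_EDID;  /* everything we implement */
        return offered & supported;         /* never ack an unknown bit */
    }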


@@ -495,9 +495,6 @@ void vg_virgl_process_cmd(VuGpu *g, struct virtio_gpu_ctrl_command *cmd)
case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
vg_get_display_info(g, cmd);
break;
case VIRTIO_GPU_CMD_GET_EDID:
vg_get_edid(g, cmd);
break;
default:
g_debug("TODO handle ctrl %x\n", cmd->cmd_hdr.type);
cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;


@@ -36,7 +36,6 @@ typedef enum VhostUserGpuRequest {
VHOST_USER_GPU_UPDATE,
VHOST_USER_GPU_DMABUF_SCANOUT,
VHOST_USER_GPU_DMABUF_UPDATE,
VHOST_USER_GPU_GET_EDID,
} VhostUserGpuRequest;
typedef struct VhostUserGpuDisplayInfoReply {
@@ -84,10 +83,6 @@ typedef struct VhostUserGpuDMABUFScanout {
int fd_drm_fourcc;
} QEMU_PACKED VhostUserGpuDMABUFScanout;
typedef struct VhostUserGpuEdidRequest {
uint32_t scanout_id;
} QEMU_PACKED VhostUserGpuEdidRequest;
typedef struct VhostUserGpuMsg {
uint32_t request; /* VhostUserGpuRequest */
uint32_t flags;
@@ -98,8 +93,6 @@ typedef struct VhostUserGpuMsg {
VhostUserGpuScanout scanout;
VhostUserGpuUpdate update;
VhostUserGpuDMABUFScanout dmabuf_scanout;
VhostUserGpuEdidRequest edid_req;
struct virtio_gpu_resp_edid resp_edid;
struct virtio_gpu_resp_display_info display_info;
uint64_t u64;
} payload;
@@ -111,8 +104,6 @@ static VhostUserGpuMsg m __attribute__ ((unused));
#define VHOST_USER_GPU_MSG_FLAG_REPLY 0x4
#define VHOST_USER_GPU_PROTOCOL_F_EDID 0
struct virtio_gpu_scanout {
uint32_t width, height;
int x, y;
@@ -131,7 +122,6 @@ typedef struct VuGpu {
bool virgl;
bool virgl_inited;
bool edid_inited;
uint32_t inflight;
struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS];
@@ -181,7 +171,6 @@ int vg_create_mapping_iov(VuGpu *g,
struct iovec **iov);
void vg_cleanup_mapping_iov(VuGpu *g, struct iovec *iov, uint32_t count);
void vg_get_display_info(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd);
void vg_get_edid(VuGpu *vg, struct virtio_gpu_ctrl_command *cmd);
void vg_wait_ok(VuGpu *g);


@@ -28,10 +28,7 @@
* EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "qemu/osdep.h"
#include "qemu/bswap.h"
#include "qemu/bitops.h"
#include "crypto/aes.h"
#include "crypto/aes-round.h"
typedef uint32_t u32;
typedef uint8_t u8;
@@ -111,152 +108,278 @@ const uint8_t AES_isbox[256] = {
0xE1, 0x69, 0x14, 0x63, 0x55, 0x21, 0x0C, 0x7D,
};
/* AES ShiftRows, for complete unrolling. */
#define AES_SH(X) (((X) * 5) & 15)
/* AES InvShiftRows, for complete unrolling. */
#define AES_ISH(X) (((X) * 13) & 15)
/*
* MixColumns lookup table, for use with rot32.
*/
static const uint32_t AES_mc_rot[256] = {
0x00000000, 0x03010102, 0x06020204, 0x05030306,
0x0c040408, 0x0f05050a, 0x0a06060c, 0x0907070e,
0x18080810, 0x1b090912, 0x1e0a0a14, 0x1d0b0b16,
0x140c0c18, 0x170d0d1a, 0x120e0e1c, 0x110f0f1e,
0x30101020, 0x33111122, 0x36121224, 0x35131326,
0x3c141428, 0x3f15152a, 0x3a16162c, 0x3917172e,
0x28181830, 0x2b191932, 0x2e1a1a34, 0x2d1b1b36,
0x241c1c38, 0x271d1d3a, 0x221e1e3c, 0x211f1f3e,
0x60202040, 0x63212142, 0x66222244, 0x65232346,
0x6c242448, 0x6f25254a, 0x6a26264c, 0x6927274e,
0x78282850, 0x7b292952, 0x7e2a2a54, 0x7d2b2b56,
0x742c2c58, 0x772d2d5a, 0x722e2e5c, 0x712f2f5e,
0x50303060, 0x53313162, 0x56323264, 0x55333366,
0x5c343468, 0x5f35356a, 0x5a36366c, 0x5937376e,
0x48383870, 0x4b393972, 0x4e3a3a74, 0x4d3b3b76,
0x443c3c78, 0x473d3d7a, 0x423e3e7c, 0x413f3f7e,
0xc0404080, 0xc3414182, 0xc6424284, 0xc5434386,
0xcc444488, 0xcf45458a, 0xca46468c, 0xc947478e,
0xd8484890, 0xdb494992, 0xde4a4a94, 0xdd4b4b96,
0xd44c4c98, 0xd74d4d9a, 0xd24e4e9c, 0xd14f4f9e,
0xf05050a0, 0xf35151a2, 0xf65252a4, 0xf55353a6,
0xfc5454a8, 0xff5555aa, 0xfa5656ac, 0xf95757ae,
0xe85858b0, 0xeb5959b2, 0xee5a5ab4, 0xed5b5bb6,
0xe45c5cb8, 0xe75d5dba, 0xe25e5ebc, 0xe15f5fbe,
0xa06060c0, 0xa36161c2, 0xa66262c4, 0xa56363c6,
0xac6464c8, 0xaf6565ca, 0xaa6666cc, 0xa96767ce,
0xb86868d0, 0xbb6969d2, 0xbe6a6ad4, 0xbd6b6bd6,
0xb46c6cd8, 0xb76d6dda, 0xb26e6edc, 0xb16f6fde,
0x907070e0, 0x937171e2, 0x967272e4, 0x957373e6,
0x9c7474e8, 0x9f7575ea, 0x9a7676ec, 0x997777ee,
0x887878f0, 0x8b7979f2, 0x8e7a7af4, 0x8d7b7bf6,
0x847c7cf8, 0x877d7dfa, 0x827e7efc, 0x817f7ffe,
0x9b80801b, 0x98818119, 0x9d82821f, 0x9e83831d,
0x97848413, 0x94858511, 0x91868617, 0x92878715,
0x8388880b, 0x80898909, 0x858a8a0f, 0x868b8b0d,
0x8f8c8c03, 0x8c8d8d01, 0x898e8e07, 0x8a8f8f05,
0xab90903b, 0xa8919139, 0xad92923f, 0xae93933d,
0xa7949433, 0xa4959531, 0xa1969637, 0xa2979735,
0xb398982b, 0xb0999929, 0xb59a9a2f, 0xb69b9b2d,
0xbf9c9c23, 0xbc9d9d21, 0xb99e9e27, 0xba9f9f25,
0xfba0a05b, 0xf8a1a159, 0xfda2a25f, 0xfea3a35d,
0xf7a4a453, 0xf4a5a551, 0xf1a6a657, 0xf2a7a755,
0xe3a8a84b, 0xe0a9a949, 0xe5aaaa4f, 0xe6abab4d,
0xefacac43, 0xecadad41, 0xe9aeae47, 0xeaafaf45,
0xcbb0b07b, 0xc8b1b179, 0xcdb2b27f, 0xceb3b37d,
0xc7b4b473, 0xc4b5b571, 0xc1b6b677, 0xc2b7b775,
0xd3b8b86b, 0xd0b9b969, 0xd5baba6f, 0xd6bbbb6d,
0xdfbcbc63, 0xdcbdbd61, 0xd9bebe67, 0xdabfbf65,
0x5bc0c09b, 0x58c1c199, 0x5dc2c29f, 0x5ec3c39d,
0x57c4c493, 0x54c5c591, 0x51c6c697, 0x52c7c795,
0x43c8c88b, 0x40c9c989, 0x45caca8f, 0x46cbcb8d,
0x4fcccc83, 0x4ccdcd81, 0x49cece87, 0x4acfcf85,
0x6bd0d0bb, 0x68d1d1b9, 0x6dd2d2bf, 0x6ed3d3bd,
0x67d4d4b3, 0x64d5d5b1, 0x61d6d6b7, 0x62d7d7b5,
0x73d8d8ab, 0x70d9d9a9, 0x75dadaaf, 0x76dbdbad,
0x7fdcdca3, 0x7cdddda1, 0x79dedea7, 0x7adfdfa5,
0x3be0e0db, 0x38e1e1d9, 0x3de2e2df, 0x3ee3e3dd,
0x37e4e4d3, 0x34e5e5d1, 0x31e6e6d7, 0x32e7e7d5,
0x23e8e8cb, 0x20e9e9c9, 0x25eaeacf, 0x26ebebcd,
0x2fececc3, 0x2cededc1, 0x29eeeec7, 0x2aefefc5,
0x0bf0f0fb, 0x08f1f1f9, 0x0df2f2ff, 0x0ef3f3fd,
0x07f4f4f3, 0x04f5f5f1, 0x01f6f6f7, 0x02f7f7f5,
0x13f8f8eb, 0x10f9f9e9, 0x15fafaef, 0x16fbfbed,
0x1ffcfce3, 0x1cfdfde1, 0x19fefee7, 0x1affffe5,
const uint8_t AES_shifts[16] = {
0, 5, 10, 15, 4, 9, 14, 3, 8, 13, 2, 7, 12, 1, 6, 11
};
/*
* Inverse MixColumns lookup table, for use with rot32.
*/
static const uint32_t AES_imc_rot[256] = {
0x00000000, 0x0b0d090e, 0x161a121c, 0x1d171b12,
0x2c342438, 0x27392d36, 0x3a2e3624, 0x31233f2a,
0x58684870, 0x5365417e, 0x4e725a6c, 0x457f5362,
0x745c6c48, 0x7f516546, 0x62467e54, 0x694b775a,
0xb0d090e0, 0xbbdd99ee, 0xa6ca82fc, 0xadc78bf2,
0x9ce4b4d8, 0x97e9bdd6, 0x8afea6c4, 0x81f3afca,
0xe8b8d890, 0xe3b5d19e, 0xfea2ca8c, 0xf5afc382,
0xc48cfca8, 0xcf81f5a6, 0xd296eeb4, 0xd99be7ba,
0x7bbb3bdb, 0x70b632d5, 0x6da129c7, 0x66ac20c9,
0x578f1fe3, 0x5c8216ed, 0x41950dff, 0x4a9804f1,
0x23d373ab, 0x28de7aa5, 0x35c961b7, 0x3ec468b9,
0x0fe75793, 0x04ea5e9d, 0x19fd458f, 0x12f04c81,
0xcb6bab3b, 0xc066a235, 0xdd71b927, 0xd67cb029,
0xe75f8f03, 0xec52860d, 0xf1459d1f, 0xfa489411,
0x9303e34b, 0x980eea45, 0x8519f157, 0x8e14f859,
0xbf37c773, 0xb43ace7d, 0xa92dd56f, 0xa220dc61,
0xf66d76ad, 0xfd607fa3, 0xe07764b1, 0xeb7a6dbf,
0xda595295, 0xd1545b9b, 0xcc434089, 0xc74e4987,
0xae053edd, 0xa50837d3, 0xb81f2cc1, 0xb31225cf,
0x82311ae5, 0x893c13eb, 0x942b08f9, 0x9f2601f7,
0x46bde64d, 0x4db0ef43, 0x50a7f451, 0x5baafd5f,
0x6a89c275, 0x6184cb7b, 0x7c93d069, 0x779ed967,
0x1ed5ae3d, 0x15d8a733, 0x08cfbc21, 0x03c2b52f,
0x32e18a05, 0x39ec830b, 0x24fb9819, 0x2ff69117,
0x8dd64d76, 0x86db4478, 0x9bcc5f6a, 0x90c15664,
0xa1e2694e, 0xaaef6040, 0xb7f87b52, 0xbcf5725c,
0xd5be0506, 0xdeb30c08, 0xc3a4171a, 0xc8a91e14,
0xf98a213e, 0xf2872830, 0xef903322, 0xe49d3a2c,
0x3d06dd96, 0x360bd498, 0x2b1ccf8a, 0x2011c684,
0x1132f9ae, 0x1a3ff0a0, 0x0728ebb2, 0x0c25e2bc,
0x656e95e6, 0x6e639ce8, 0x737487fa, 0x78798ef4,
0x495ab1de, 0x4257b8d0, 0x5f40a3c2, 0x544daacc,
0xf7daec41, 0xfcd7e54f, 0xe1c0fe5d, 0xeacdf753,
0xdbeec879, 0xd0e3c177, 0xcdf4da65, 0xc6f9d36b,
0xafb2a431, 0xa4bfad3f, 0xb9a8b62d, 0xb2a5bf23,
0x83868009, 0x888b8907, 0x959c9215, 0x9e919b1b,
0x470a7ca1, 0x4c0775af, 0x51106ebd, 0x5a1d67b3,
0x6b3e5899, 0x60335197, 0x7d244a85, 0x7629438b,
0x1f6234d1, 0x146f3ddf, 0x097826cd, 0x02752fc3,
0x335610e9, 0x385b19e7, 0x254c02f5, 0x2e410bfb,
0x8c61d79a, 0x876cde94, 0x9a7bc586, 0x9176cc88,
0xa055f3a2, 0xab58faac, 0xb64fe1be, 0xbd42e8b0,
0xd4099fea, 0xdf0496e4, 0xc2138df6, 0xc91e84f8,
0xf83dbbd2, 0xf330b2dc, 0xee27a9ce, 0xe52aa0c0,
0x3cb1477a, 0x37bc4e74, 0x2aab5566, 0x21a65c68,
0x10856342, 0x1b886a4c, 0x069f715e, 0x0d927850,
0x64d90f0a, 0x6fd40604, 0x72c31d16, 0x79ce1418,
0x48ed2b32, 0x43e0223c, 0x5ef7392e, 0x55fa3020,
0x01b79aec, 0x0aba93e2, 0x17ad88f0, 0x1ca081fe,
0x2d83bed4, 0x268eb7da, 0x3b99acc8, 0x3094a5c6,
0x59dfd29c, 0x52d2db92, 0x4fc5c080, 0x44c8c98e,
0x75ebf6a4, 0x7ee6ffaa, 0x63f1e4b8, 0x68fcedb6,
0xb1670a0c, 0xba6a0302, 0xa77d1810, 0xac70111e,
0x9d532e34, 0x965e273a, 0x8b493c28, 0x80443526,
0xe90f427c, 0xe2024b72, 0xff155060, 0xf418596e,
0xc53b6644, 0xce366f4a, 0xd3217458, 0xd82c7d56,
0x7a0ca137, 0x7101a839, 0x6c16b32b, 0x671bba25,
0x5638850f, 0x5d358c01, 0x40229713, 0x4b2f9e1d,
0x2264e947, 0x2969e049, 0x347efb5b, 0x3f73f255,
0x0e50cd7f, 0x055dc471, 0x184adf63, 0x1347d66d,
0xcadc31d7, 0xc1d138d9, 0xdcc623cb, 0xd7cb2ac5,
0xe6e815ef, 0xede51ce1, 0xf0f207f3, 0xfbff0efd,
0x92b479a7, 0x99b970a9, 0x84ae6bbb, 0x8fa362b5,
0xbe805d9f, 0xb58d5491, 0xa89a4f83, 0xa397468d,
const uint8_t AES_ishifts[16] = {
0, 13, 10, 7, 4, 1, 14, 11, 8, 5, 2, 15, 12, 9, 6, 3
};
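This table swap trades four fixed columns for one rotating table: each column of AES_imc[x] is the previous column byte-rotated, so a single 256-entry table plus rotations reproduces all four. A hedged sketch of the lookup; the rotation direction assumes the byte order of the tables as printed here:

    #include <stdint.h>

    static inline uint32_t ror32(uint32_t x, unsigned int n)
    {
        n &= 31;
        return n ? (x >> n) | (x << (32 - n)) : x;
    }

    /* imc_col(tab, x, r) ~ AES_imc[x][r], with tab[x] == AES_imc[x][0]:
     * e.g. AES_imc[1][0] = 0x0E090D0B and ror32(0x0E090D0B, 8) = 0x0B0E090D,
     * which is AES_imc[1][1]. */
    static inline uint32_t imc_col(const uint32_t tab[256], uint8_t x, int r)
    {
        return ror32(tab[x], 8 * (unsigned int)r);
    }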
/* AES_imc[x][0] = [x].[0e, 09, 0d, 0b]; */
/* AES_imc[x][1] = [x].[0b, 0e, 09, 0d]; */
/* AES_imc[x][2] = [x].[0d, 0b, 0e, 09]; */
/* AES_imc[x][3] = [x].[09, 0d, 0b, 0e]; */
const uint32_t AES_imc[256][4] = {
{ 0x00000000, 0x00000000, 0x00000000, 0x00000000, }, /* x=00 */
{ 0x0E090D0B, 0x0B0E090D, 0x0D0B0E09, 0x090D0B0E, }, /* x=01 */
{ 0x1C121A16, 0x161C121A, 0x1A161C12, 0x121A161C, }, /* x=02 */
{ 0x121B171D, 0x1D121B17, 0x171D121B, 0x1B171D12, }, /* x=03 */
{ 0x3824342C, 0x2C382434, 0x342C3824, 0x24342C38, }, /* x=04 */
{ 0x362D3927, 0x27362D39, 0x3927362D, 0x2D392736, }, /* x=05 */
{ 0x24362E3A, 0x3A24362E, 0x2E3A2436, 0x362E3A24, }, /* x=06 */
{ 0x2A3F2331, 0x312A3F23, 0x23312A3F, 0x3F23312A, }, /* x=07 */
{ 0x70486858, 0x58704868, 0x68587048, 0x48685870, }, /* x=08 */
{ 0x7E416553, 0x537E4165, 0x65537E41, 0x4165537E, }, /* x=09 */
{ 0x6C5A724E, 0x4E6C5A72, 0x724E6C5A, 0x5A724E6C, }, /* x=0A */
{ 0x62537F45, 0x4562537F, 0x7F456253, 0x537F4562, }, /* x=0B */
{ 0x486C5C74, 0x74486C5C, 0x5C74486C, 0x6C5C7448, }, /* x=0C */
{ 0x4665517F, 0x7F466551, 0x517F4665, 0x65517F46, }, /* x=0D */
{ 0x547E4662, 0x62547E46, 0x4662547E, 0x7E466254, }, /* x=0E */
{ 0x5A774B69, 0x695A774B, 0x4B695A77, 0x774B695A, }, /* x=0F */
{ 0xE090D0B0, 0xB0E090D0, 0xD0B0E090, 0x90D0B0E0, }, /* x=10 */
{ 0xEE99DDBB, 0xBBEE99DD, 0xDDBBEE99, 0x99DDBBEE, }, /* x=11 */
{ 0xFC82CAA6, 0xA6FC82CA, 0xCAA6FC82, 0x82CAA6FC, }, /* x=12 */
{ 0xF28BC7AD, 0xADF28BC7, 0xC7ADF28B, 0x8BC7ADF2, }, /* x=13 */
{ 0xD8B4E49C, 0x9CD8B4E4, 0xE49CD8B4, 0xB4E49CD8, }, /* x=14 */
{ 0xD6BDE997, 0x97D6BDE9, 0xE997D6BD, 0xBDE997D6, }, /* x=15 */
{ 0xC4A6FE8A, 0x8AC4A6FE, 0xFE8AC4A6, 0xA6FE8AC4, }, /* x=16 */
{ 0xCAAFF381, 0x81CAAFF3, 0xF381CAAF, 0xAFF381CA, }, /* x=17 */
{ 0x90D8B8E8, 0xE890D8B8, 0xB8E890D8, 0xD8B8E890, }, /* x=18 */
{ 0x9ED1B5E3, 0xE39ED1B5, 0xB5E39ED1, 0xD1B5E39E, }, /* x=19 */
{ 0x8CCAA2FE, 0xFE8CCAA2, 0xA2FE8CCA, 0xCAA2FE8C, }, /* x=1A */
{ 0x82C3AFF5, 0xF582C3AF, 0xAFF582C3, 0xC3AFF582, }, /* x=1B */
{ 0xA8FC8CC4, 0xC4A8FC8C, 0x8CC4A8FC, 0xFC8CC4A8, }, /* x=1C */
{ 0xA6F581CF, 0xCFA6F581, 0x81CFA6F5, 0xF581CFA6, }, /* x=1D */
{ 0xB4EE96D2, 0xD2B4EE96, 0x96D2B4EE, 0xEE96D2B4, }, /* x=1E */
{ 0xBAE79BD9, 0xD9BAE79B, 0x9BD9BAE7, 0xE79BD9BA, }, /* x=1F */
{ 0xDB3BBB7B, 0x7BDB3BBB, 0xBB7BDB3B, 0x3BBB7BDB, }, /* x=20 */
{ 0xD532B670, 0x70D532B6, 0xB670D532, 0x32B670D5, }, /* x=21 */
{ 0xC729A16D, 0x6DC729A1, 0xA16DC729, 0x29A16DC7, }, /* x=22 */
{ 0xC920AC66, 0x66C920AC, 0xAC66C920, 0x20AC66C9, }, /* x=23 */
{ 0xE31F8F57, 0x57E31F8F, 0x8F57E31F, 0x1F8F57E3, }, /* x=24 */
{ 0xED16825C, 0x5CED1682, 0x825CED16, 0x16825CED, }, /* x=25 */
{ 0xFF0D9541, 0x41FF0D95, 0x9541FF0D, 0x0D9541FF, }, /* x=26 */
{ 0xF104984A, 0x4AF10498, 0x984AF104, 0x04984AF1, }, /* x=27 */
{ 0xAB73D323, 0x23AB73D3, 0xD323AB73, 0x73D323AB, }, /* x=28 */
{ 0xA57ADE28, 0x28A57ADE, 0xDE28A57A, 0x7ADE28A5, }, /* x=29 */
{ 0xB761C935, 0x35B761C9, 0xC935B761, 0x61C935B7, }, /* x=2A */
{ 0xB968C43E, 0x3EB968C4, 0xC43EB968, 0x68C43EB9, }, /* x=2B */
{ 0x9357E70F, 0x0F9357E7, 0xE70F9357, 0x57E70F93, }, /* x=2C */
{ 0x9D5EEA04, 0x049D5EEA, 0xEA049D5E, 0x5EEA049D, }, /* x=2D */
{ 0x8F45FD19, 0x198F45FD, 0xFD198F45, 0x45FD198F, }, /* x=2E */
{ 0x814CF012, 0x12814CF0, 0xF012814C, 0x4CF01281, }, /* x=2F */
{ 0x3BAB6BCB, 0xCB3BAB6B, 0x6BCB3BAB, 0xAB6BCB3B, }, /* x=30 */
{ 0x35A266C0, 0xC035A266, 0x66C035A2, 0xA266C035, }, /* x=31 */
{ 0x27B971DD, 0xDD27B971, 0x71DD27B9, 0xB971DD27, }, /* x=32 */
{ 0x29B07CD6, 0xD629B07C, 0x7CD629B0, 0xB07CD629, }, /* x=33 */
{ 0x038F5FE7, 0xE7038F5F, 0x5FE7038F, 0x8F5FE703, }, /* x=34 */
{ 0x0D8652EC, 0xEC0D8652, 0x52EC0D86, 0x8652EC0D, }, /* x=35 */
{ 0x1F9D45F1, 0xF11F9D45, 0x45F11F9D, 0x9D45F11F, }, /* x=36 */
{ 0x119448FA, 0xFA119448, 0x48FA1194, 0x9448FA11, }, /* x=37 */
{ 0x4BE30393, 0x934BE303, 0x03934BE3, 0xE303934B, }, /* x=38 */
{ 0x45EA0E98, 0x9845EA0E, 0x0E9845EA, 0xEA0E9845, }, /* x=39 */
{ 0x57F11985, 0x8557F119, 0x198557F1, 0xF1198557, }, /* x=3A */
{ 0x59F8148E, 0x8E59F814, 0x148E59F8, 0xF8148E59, }, /* x=3B */
{ 0x73C737BF, 0xBF73C737, 0x37BF73C7, 0xC737BF73, }, /* x=3C */
{ 0x7DCE3AB4, 0xB47DCE3A, 0x3AB47DCE, 0xCE3AB47D, }, /* x=3D */
{ 0x6FD52DA9, 0xA96FD52D, 0x2DA96FD5, 0xD52DA96F, }, /* x=3E */
{ 0x61DC20A2, 0xA261DC20, 0x20A261DC, 0xDC20A261, }, /* x=3F */
{ 0xAD766DF6, 0xF6AD766D, 0x6DF6AD76, 0x766DF6AD, }, /* x=40 */
{ 0xA37F60FD, 0xFDA37F60, 0x60FDA37F, 0x7F60FDA3, }, /* x=41 */
{ 0xB16477E0, 0xE0B16477, 0x77E0B164, 0x6477E0B1, }, /* x=42 */
{ 0xBF6D7AEB, 0xEBBF6D7A, 0x7AEBBF6D, 0x6D7AEBBF, }, /* x=43 */
{ 0x955259DA, 0xDA955259, 0x59DA9552, 0x5259DA95, }, /* x=44 */
{ 0x9B5B54D1, 0xD19B5B54, 0x54D19B5B, 0x5B54D19B, }, /* x=45 */
{ 0x894043CC, 0xCC894043, 0x43CC8940, 0x4043CC89, }, /* x=46 */
{ 0x87494EC7, 0xC787494E, 0x4EC78749, 0x494EC787, }, /* x=47 */
{ 0xDD3E05AE, 0xAEDD3E05, 0x05AEDD3E, 0x3E05AEDD, }, /* x=48 */
{ 0xD33708A5, 0xA5D33708, 0x08A5D337, 0x3708A5D3, }, /* x=49 */
{ 0xC12C1FB8, 0xB8C12C1F, 0x1FB8C12C, 0x2C1FB8C1, }, /* x=4A */
{ 0xCF2512B3, 0xB3CF2512, 0x12B3CF25, 0x2512B3CF, }, /* x=4B */
{ 0xE51A3182, 0x82E51A31, 0x3182E51A, 0x1A3182E5, }, /* x=4C */
{ 0xEB133C89, 0x89EB133C, 0x3C89EB13, 0x133C89EB, }, /* x=4D */
{ 0xF9082B94, 0x94F9082B, 0x2B94F908, 0x082B94F9, }, /* x=4E */
{ 0xF701269F, 0x9FF70126, 0x269FF701, 0x01269FF7, }, /* x=4F */
{ 0x4DE6BD46, 0x464DE6BD, 0xBD464DE6, 0xE6BD464D, }, /* x=50 */
{ 0x43EFB04D, 0x4D43EFB0, 0xB04D43EF, 0xEFB04D43, }, /* x=51 */
{ 0x51F4A750, 0x5051F4A7, 0xA75051F4, 0xF4A75051, }, /* x=52 */
{ 0x5FFDAA5B, 0x5B5FFDAA, 0xAA5B5FFD, 0xFDAA5B5F, }, /* x=53 */
{ 0x75C2896A, 0x6A75C289, 0x896A75C2, 0xC2896A75, }, /* x=54 */
{ 0x7BCB8461, 0x617BCB84, 0x84617BCB, 0xCB84617B, }, /* x=55 */
{ 0x69D0937C, 0x7C69D093, 0x937C69D0, 0xD0937C69, }, /* x=56 */
{ 0x67D99E77, 0x7767D99E, 0x9E7767D9, 0xD99E7767, }, /* x=57 */
{ 0x3DAED51E, 0x1E3DAED5, 0xD51E3DAE, 0xAED51E3D, }, /* x=58 */
{ 0x33A7D815, 0x1533A7D8, 0xD81533A7, 0xA7D81533, }, /* x=59 */
{ 0x21BCCF08, 0x0821BCCF, 0xCF0821BC, 0xBCCF0821, }, /* x=5A */
{ 0x2FB5C203, 0x032FB5C2, 0xC2032FB5, 0xB5C2032F, }, /* x=5B */
{ 0x058AE132, 0x32058AE1, 0xE132058A, 0x8AE13205, }, /* x=5C */
{ 0x0B83EC39, 0x390B83EC, 0xEC390B83, 0x83EC390B, }, /* x=5D */
{ 0x1998FB24, 0x241998FB, 0xFB241998, 0x98FB2419, }, /* x=5E */
{ 0x1791F62F, 0x2F1791F6, 0xF62F1791, 0x91F62F17, }, /* x=5F */
{ 0x764DD68D, 0x8D764DD6, 0xD68D764D, 0x4DD68D76, }, /* x=60 */
{ 0x7844DB86, 0x867844DB, 0xDB867844, 0x44DB8678, }, /* x=61 */
{ 0x6A5FCC9B, 0x9B6A5FCC, 0xCC9B6A5F, 0x5FCC9B6A, }, /* x=62 */
{ 0x6456C190, 0x906456C1, 0xC1906456, 0x56C19064, }, /* x=63 */
{ 0x4E69E2A1, 0xA14E69E2, 0xE2A14E69, 0x69E2A14E, }, /* x=64 */
{ 0x4060EFAA, 0xAA4060EF, 0xEFAA4060, 0x60EFAA40, }, /* x=65 */
{ 0x527BF8B7, 0xB7527BF8, 0xF8B7527B, 0x7BF8B752, }, /* x=66 */
{ 0x5C72F5BC, 0xBC5C72F5, 0xF5BC5C72, 0x72F5BC5C, }, /* x=67 */
{ 0x0605BED5, 0xD50605BE, 0xBED50605, 0x05BED506, }, /* x=68 */
{ 0x080CB3DE, 0xDE080CB3, 0xB3DE080C, 0x0CB3DE08, }, /* x=69 */
{ 0x1A17A4C3, 0xC31A17A4, 0xA4C31A17, 0x17A4C31A, }, /* x=6A */
{ 0x141EA9C8, 0xC8141EA9, 0xA9C8141E, 0x1EA9C814, }, /* x=6B */
{ 0x3E218AF9, 0xF93E218A, 0x8AF93E21, 0x218AF93E, }, /* x=6C */
{ 0x302887F2, 0xF2302887, 0x87F23028, 0x2887F230, }, /* x=6D */
{ 0x223390EF, 0xEF223390, 0x90EF2233, 0x3390EF22, }, /* x=6E */
{ 0x2C3A9DE4, 0xE42C3A9D, 0x9DE42C3A, 0x3A9DE42C, }, /* x=6F */
{ 0x96DD063D, 0x3D96DD06, 0x063D96DD, 0xDD063D96, }, /* x=70 */
{ 0x98D40B36, 0x3698D40B, 0x0B3698D4, 0xD40B3698, }, /* x=71 */
{ 0x8ACF1C2B, 0x2B8ACF1C, 0x1C2B8ACF, 0xCF1C2B8A, }, /* x=72 */
{ 0x84C61120, 0x2084C611, 0x112084C6, 0xC6112084, }, /* x=73 */
{ 0xAEF93211, 0x11AEF932, 0x3211AEF9, 0xF93211AE, }, /* x=74 */
{ 0xA0F03F1A, 0x1AA0F03F, 0x3F1AA0F0, 0xF03F1AA0, }, /* x=75 */
{ 0xB2EB2807, 0x07B2EB28, 0x2807B2EB, 0xEB2807B2, }, /* x=76 */
{ 0xBCE2250C, 0x0CBCE225, 0x250CBCE2, 0xE2250CBC, }, /* x=77 */
{ 0xE6956E65, 0x65E6956E, 0x6E65E695, 0x956E65E6, }, /* x=78 */
{ 0xE89C636E, 0x6EE89C63, 0x636EE89C, 0x9C636EE8, }, /* x=79 */
{ 0xFA877473, 0x73FA8774, 0x7473FA87, 0x877473FA, }, /* x=7A */
{ 0xF48E7978, 0x78F48E79, 0x7978F48E, 0x8E7978F4, }, /* x=7B */
{ 0xDEB15A49, 0x49DEB15A, 0x5A49DEB1, 0xB15A49DE, }, /* x=7C */
{ 0xD0B85742, 0x42D0B857, 0x5742D0B8, 0xB85742D0, }, /* x=7D */
{ 0xC2A3405F, 0x5FC2A340, 0x405FC2A3, 0xA3405FC2, }, /* x=7E */
{ 0xCCAA4D54, 0x54CCAA4D, 0x4D54CCAA, 0xAA4D54CC, }, /* x=7F */
{ 0x41ECDAF7, 0xF741ECDA, 0xDAF741EC, 0xECDAF741, }, /* x=80 */
{ 0x4FE5D7FC, 0xFC4FE5D7, 0xD7FC4FE5, 0xE5D7FC4F, }, /* x=81 */
{ 0x5DFEC0E1, 0xE15DFEC0, 0xC0E15DFE, 0xFEC0E15D, }, /* x=82 */
{ 0x53F7CDEA, 0xEA53F7CD, 0xCDEA53F7, 0xF7CDEA53, }, /* x=83 */
{ 0x79C8EEDB, 0xDB79C8EE, 0xEEDB79C8, 0xC8EEDB79, }, /* x=84 */
{ 0x77C1E3D0, 0xD077C1E3, 0xE3D077C1, 0xC1E3D077, }, /* x=85 */
{ 0x65DAF4CD, 0xCD65DAF4, 0xF4CD65DA, 0xDAF4CD65, }, /* x=86 */
{ 0x6BD3F9C6, 0xC66BD3F9, 0xF9C66BD3, 0xD3F9C66B, }, /* x=87 */
{ 0x31A4B2AF, 0xAF31A4B2, 0xB2AF31A4, 0xA4B2AF31, }, /* x=88 */
{ 0x3FADBFA4, 0xA43FADBF, 0xBFA43FAD, 0xADBFA43F, }, /* x=89 */
{ 0x2DB6A8B9, 0xB92DB6A8, 0xA8B92DB6, 0xB6A8B92D, }, /* x=8A */
{ 0x23BFA5B2, 0xB223BFA5, 0xA5B223BF, 0xBFA5B223, }, /* x=8B */
{ 0x09808683, 0x83098086, 0x86830980, 0x80868309, }, /* x=8C */
{ 0x07898B88, 0x8807898B, 0x8B880789, 0x898B8807, }, /* x=8D */
{ 0x15929C95, 0x9515929C, 0x9C951592, 0x929C9515, }, /* x=8E */
{ 0x1B9B919E, 0x9E1B9B91, 0x919E1B9B, 0x9B919E1B, }, /* x=8F */
{ 0xA17C0A47, 0x47A17C0A, 0x0A47A17C, 0x7C0A47A1, }, /* x=90 */
{ 0xAF75074C, 0x4CAF7507, 0x074CAF75, 0x75074CAF, }, /* x=91 */
{ 0xBD6E1051, 0x51BD6E10, 0x1051BD6E, 0x6E1051BD, }, /* x=92 */
{ 0xB3671D5A, 0x5AB3671D, 0x1D5AB367, 0x671D5AB3, }, /* x=93 */
{ 0x99583E6B, 0x6B99583E, 0x3E6B9958, 0x583E6B99, }, /* x=94 */
{ 0x97513360, 0x60975133, 0x33609751, 0x51336097, }, /* x=95 */
{ 0x854A247D, 0x7D854A24, 0x247D854A, 0x4A247D85, }, /* x=96 */
{ 0x8B432976, 0x768B4329, 0x29768B43, 0x4329768B, }, /* x=97 */
{ 0xD134621F, 0x1FD13462, 0x621FD134, 0x34621FD1, }, /* x=98 */
{ 0xDF3D6F14, 0x14DF3D6F, 0x6F14DF3D, 0x3D6F14DF, }, /* x=99 */
{ 0xCD267809, 0x09CD2678, 0x7809CD26, 0x267809CD, }, /* x=9A */
{ 0xC32F7502, 0x02C32F75, 0x7502C32F, 0x2F7502C3, }, /* x=9B */
{ 0xE9105633, 0x33E91056, 0x5633E910, 0x105633E9, }, /* x=9C */
{ 0xE7195B38, 0x38E7195B, 0x5B38E719, 0x195B38E7, }, /* x=9D */
{ 0xF5024C25, 0x25F5024C, 0x4C25F502, 0x024C25F5, }, /* x=9E */
{ 0xFB0B412E, 0x2EFB0B41, 0x412EFB0B, 0x0B412EFB, }, /* x=9F */
{ 0x9AD7618C, 0x8C9AD761, 0x618C9AD7, 0xD7618C9A, }, /* x=A0 */
{ 0x94DE6C87, 0x8794DE6C, 0x6C8794DE, 0xDE6C8794, }, /* x=A1 */
{ 0x86C57B9A, 0x9A86C57B, 0x7B9A86C5, 0xC57B9A86, }, /* x=A2 */
{ 0x88CC7691, 0x9188CC76, 0x769188CC, 0xCC769188, }, /* x=A3 */
{ 0xA2F355A0, 0xA0A2F355, 0x55A0A2F3, 0xF355A0A2, }, /* x=A4 */
{ 0xACFA58AB, 0xABACFA58, 0x58ABACFA, 0xFA58ABAC, }, /* x=A5 */
{ 0xBEE14FB6, 0xB6BEE14F, 0x4FB6BEE1, 0xE14FB6BE, }, /* x=A6 */
{ 0xB0E842BD, 0xBDB0E842, 0x42BDB0E8, 0xE842BDB0, }, /* x=A7 */
{ 0xEA9F09D4, 0xD4EA9F09, 0x09D4EA9F, 0x9F09D4EA, }, /* x=A8 */
{ 0xE49604DF, 0xDFE49604, 0x04DFE496, 0x9604DFE4, }, /* x=A9 */
{ 0xF68D13C2, 0xC2F68D13, 0x13C2F68D, 0x8D13C2F6, }, /* x=AA */
{ 0xF8841EC9, 0xC9F8841E, 0x1EC9F884, 0x841EC9F8, }, /* x=AB */
{ 0xD2BB3DF8, 0xF8D2BB3D, 0x3DF8D2BB, 0xBB3DF8D2, }, /* x=AC */
{ 0xDCB230F3, 0xF3DCB230, 0x30F3DCB2, 0xB230F3DC, }, /* x=AD */
{ 0xCEA927EE, 0xEECEA927, 0x27EECEA9, 0xA927EECE, }, /* x=AE */
{ 0xC0A02AE5, 0xE5C0A02A, 0x2AE5C0A0, 0xA02AE5C0, }, /* x=AF */
{ 0x7A47B13C, 0x3C7A47B1, 0xB13C7A47, 0x47B13C7A, }, /* x=B0 */
{ 0x744EBC37, 0x37744EBC, 0xBC37744E, 0x4EBC3774, }, /* x=B1 */
{ 0x6655AB2A, 0x2A6655AB, 0xAB2A6655, 0x55AB2A66, }, /* x=B2 */
{ 0x685CA621, 0x21685CA6, 0xA621685C, 0x5CA62168, }, /* x=B3 */
{ 0x42638510, 0x10426385, 0x85104263, 0x63851042, }, /* x=B4 */
{ 0x4C6A881B, 0x1B4C6A88, 0x881B4C6A, 0x6A881B4C, }, /* x=B5 */
{ 0x5E719F06, 0x065E719F, 0x9F065E71, 0x719F065E, }, /* x=B6 */
{ 0x5078920D, 0x0D507892, 0x920D5078, 0x78920D50, }, /* x=B7 */
{ 0x0A0FD964, 0x640A0FD9, 0xD9640A0F, 0x0FD9640A, }, /* x=B8 */
{ 0x0406D46F, 0x6F0406D4, 0xD46F0406, 0x06D46F04, }, /* x=B9 */
{ 0x161DC372, 0x72161DC3, 0xC372161D, 0x1DC37216, }, /* x=BA */
{ 0x1814CE79, 0x791814CE, 0xCE791814, 0x14CE7918, }, /* x=BB */
{ 0x322BED48, 0x48322BED, 0xED48322B, 0x2BED4832, }, /* x=BC */
{ 0x3C22E043, 0x433C22E0, 0xE0433C22, 0x22E0433C, }, /* x=BD */
{ 0x2E39F75E, 0x5E2E39F7, 0xF75E2E39, 0x39F75E2E, }, /* x=BE */
{ 0x2030FA55, 0x552030FA, 0xFA552030, 0x30FA5520, }, /* x=BF */
{ 0xEC9AB701, 0x01EC9AB7, 0xB701EC9A, 0x9AB701EC, }, /* x=C0 */
{ 0xE293BA0A, 0x0AE293BA, 0xBA0AE293, 0x93BA0AE2, }, /* x=C1 */
{ 0xF088AD17, 0x17F088AD, 0xAD17F088, 0x88AD17F0, }, /* x=C2 */
{ 0xFE81A01C, 0x1CFE81A0, 0xA01CFE81, 0x81A01CFE, }, /* x=C3 */
{ 0xD4BE832D, 0x2DD4BE83, 0x832DD4BE, 0xBE832DD4, }, /* x=C4 */
{ 0xDAB78E26, 0x26DAB78E, 0x8E26DAB7, 0xB78E26DA, }, /* x=C5 */
{ 0xC8AC993B, 0x3BC8AC99, 0x993BC8AC, 0xAC993BC8, }, /* x=C6 */
{ 0xC6A59430, 0x30C6A594, 0x9430C6A5, 0xA59430C6, }, /* x=C7 */
{ 0x9CD2DF59, 0x599CD2DF, 0xDF599CD2, 0xD2DF599C, }, /* x=C8 */
{ 0x92DBD252, 0x5292DBD2, 0xD25292DB, 0xDBD25292, }, /* x=C9 */
{ 0x80C0C54F, 0x4F80C0C5, 0xC54F80C0, 0xC0C54F80, }, /* x=CA */
{ 0x8EC9C844, 0x448EC9C8, 0xC8448EC9, 0xC9C8448E, }, /* x=CB */
{ 0xA4F6EB75, 0x75A4F6EB, 0xEB75A4F6, 0xF6EB75A4, }, /* x=CC */
{ 0xAAFFE67E, 0x7EAAFFE6, 0xE67EAAFF, 0xFFE67EAA, }, /* x=CD */
{ 0xB8E4F163, 0x63B8E4F1, 0xF163B8E4, 0xE4F163B8, }, /* x=CE */
{ 0xB6EDFC68, 0x68B6EDFC, 0xFC68B6ED, 0xEDFC68B6, }, /* x=CF */
{ 0x0C0A67B1, 0xB10C0A67, 0x67B10C0A, 0x0A67B10C, }, /* x=D0 */
{ 0x02036ABA, 0xBA02036A, 0x6ABA0203, 0x036ABA02, }, /* x=D1 */
{ 0x10187DA7, 0xA710187D, 0x7DA71018, 0x187DA710, }, /* x=D2 */
{ 0x1E1170AC, 0xAC1E1170, 0x70AC1E11, 0x1170AC1E, }, /* x=D3 */
{ 0x342E539D, 0x9D342E53, 0x539D342E, 0x2E539D34, }, /* x=D4 */
{ 0x3A275E96, 0x963A275E, 0x5E963A27, 0x275E963A, }, /* x=D5 */
{ 0x283C498B, 0x8B283C49, 0x498B283C, 0x3C498B28, }, /* x=D6 */
{ 0x26354480, 0x80263544, 0x44802635, 0x35448026, }, /* x=D7 */
{ 0x7C420FE9, 0xE97C420F, 0x0FE97C42, 0x420FE97C, }, /* x=D8 */
{ 0x724B02E2, 0xE2724B02, 0x02E2724B, 0x4B02E272, }, /* x=D9 */
{ 0x605015FF, 0xFF605015, 0x15FF6050, 0x5015FF60, }, /* x=DA */
{ 0x6E5918F4, 0xF46E5918, 0x18F46E59, 0x5918F46E, }, /* x=DB */
{ 0x44663BC5, 0xC544663B, 0x3BC54466, 0x663BC544, }, /* x=DC */
{ 0x4A6F36CE, 0xCE4A6F36, 0x36CE4A6F, 0x6F36CE4A, }, /* x=DD */
{ 0x587421D3, 0xD3587421, 0x21D35874, 0x7421D358, }, /* x=DE */
{ 0x567D2CD8, 0xD8567D2C, 0x2CD8567D, 0x7D2CD856, }, /* x=DF */
{ 0x37A10C7A, 0x7A37A10C, 0x0C7A37A1, 0xA10C7A37, }, /* x=E0 */
{ 0x39A80171, 0x7139A801, 0x017139A8, 0xA8017139, }, /* x=E1 */
{ 0x2BB3166C, 0x6C2BB316, 0x166C2BB3, 0xB3166C2B, }, /* x=E2 */
{ 0x25BA1B67, 0x6725BA1B, 0x1B6725BA, 0xBA1B6725, }, /* x=E3 */
{ 0x0F853856, 0x560F8538, 0x38560F85, 0x8538560F, }, /* x=E4 */
{ 0x018C355D, 0x5D018C35, 0x355D018C, 0x8C355D01, }, /* x=E5 */
{ 0x13972240, 0x40139722, 0x22401397, 0x97224013, }, /* x=E6 */
{ 0x1D9E2F4B, 0x4B1D9E2F, 0x2F4B1D9E, 0x9E2F4B1D, }, /* x=E7 */
{ 0x47E96422, 0x2247E964, 0x642247E9, 0xE9642247, }, /* x=E8 */
{ 0x49E06929, 0x2949E069, 0x692949E0, 0xE0692949, }, /* x=E9 */
{ 0x5BFB7E34, 0x345BFB7E, 0x7E345BFB, 0xFB7E345B, }, /* x=EA */
{ 0x55F2733F, 0x3F55F273, 0x733F55F2, 0xF2733F55, }, /* x=EB */
{ 0x7FCD500E, 0x0E7FCD50, 0x500E7FCD, 0xCD500E7F, }, /* x=EC */
{ 0x71C45D05, 0x0571C45D, 0x5D0571C4, 0xC45D0571, }, /* x=ED */
{ 0x63DF4A18, 0x1863DF4A, 0x4A1863DF, 0xDF4A1863, }, /* x=EE */
{ 0x6DD64713, 0x136DD647, 0x47136DD6, 0xD647136D, }, /* x=EF */
{ 0xD731DCCA, 0xCAD731DC, 0xDCCAD731, 0x31DCCAD7, }, /* x=F0 */
{ 0xD938D1C1, 0xC1D938D1, 0xD1C1D938, 0x38D1C1D9, }, /* x=F1 */
{ 0xCB23C6DC, 0xDCCB23C6, 0xC6DCCB23, 0x23C6DCCB, }, /* x=F2 */
{ 0xC52ACBD7, 0xD7C52ACB, 0xCBD7C52A, 0x2ACBD7C5, }, /* x=F3 */
{ 0xEF15E8E6, 0xE6EF15E8, 0xE8E6EF15, 0x15E8E6EF, }, /* x=F4 */
{ 0xE11CE5ED, 0xEDE11CE5, 0xE5EDE11C, 0x1CE5EDE1, }, /* x=F5 */
{ 0xF307F2F0, 0xF0F307F2, 0xF2F0F307, 0x07F2F0F3, }, /* x=F6 */
{ 0xFD0EFFFB, 0xFBFD0EFF, 0xFFFBFD0E, 0x0EFFFBFD, }, /* x=F7 */
{ 0xA779B492, 0x92A779B4, 0xB492A779, 0x79B492A7, }, /* x=F8 */
{ 0xA970B999, 0x99A970B9, 0xB999A970, 0x70B999A9, }, /* x=F9 */
{ 0xBB6BAE84, 0x84BB6BAE, 0xAE84BB6B, 0x6BAE84BB, }, /* x=FA */
{ 0xB562A38F, 0x8FB562A3, 0xA38FB562, 0x62A38FB5, }, /* x=FB */
{ 0x9F5D80BE, 0xBE9F5D80, 0x80BE9F5D, 0x5D80BE9F, }, /* x=FC */
{ 0x91548DB5, 0xB591548D, 0x8DB59154, 0x548DB591, }, /* x=FD */
{ 0x834F9AA8, 0xA8834F9A, 0x9AA8834F, 0x4F9AA883, }, /* x=FE */
{ 0x8D4697A3, 0xA38D4697, 0x97A38D46, 0x4697A38D, }, /* x=FF */
};
/*
AES_Te0[x] = S [x].[02, 01, 01, 03];
@@ -272,7 +395,7 @@ AES_Td3[x] = Si[x].[09, 0d, 0b, 0e];
AES_Td4[x] = Si[x].[01, 01, 01, 01];
*/
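For reference, each entry of these tables follows mechanically from the S-box and GF(2^8) doubling. A minimal sketch (the helper names xtime and te0_entry are illustrative, not part of the patch):

/* GF(2^8) multiply-by-2 modulo the AES polynomial x^8+x^4+x^3+x+1. */
static uint8_t xtime(uint8_t b)
{
    return (b << 1) ^ ((b & 0x80) ? 0x1b : 0);
}

/* AES_Te0[x] = S[x].[02, 01, 01, 03]: e.g. x=0 yields 0xc66363a5 below. */
static uint32_t te0_entry(uint8_t x)
{
    uint8_t s = AES_sbox[x];            /* forward S-box, defined earlier */
    return ((uint32_t)xtime(s) << 24) | ((uint32_t)s << 16) |
           ((uint32_t)s << 8) | (uint8_t)(xtime(s) ^ s);
}

The other Te tables are byte rotations of Te0, and the Td tables repeat the construction with the inverse S-box and the InvMixColumns coefficients.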
static const uint32_t AES_Te0[256] = {
const uint32_t AES_Te0[256] = {
0xc66363a5U, 0xf87c7c84U, 0xee777799U, 0xf67b7b8dU,
0xfff2f20dU, 0xd66b6bbdU, 0xde6f6fb1U, 0x91c5c554U,
0x60303050U, 0x02010103U, 0xce6767a9U, 0x562b2b7dU,
@@ -338,8 +461,7 @@ static const uint32_t AES_Te0[256] = {
0x824141c3U, 0x299999b0U, 0x5a2d2d77U, 0x1e0f0f11U,
0x7bb0b0cbU, 0xa85454fcU, 0x6dbbbbd6U, 0x2c16163aU,
};
static const uint32_t AES_Te1[256] = {
const uint32_t AES_Te1[256] = {
0xa5c66363U, 0x84f87c7cU, 0x99ee7777U, 0x8df67b7bU,
0x0dfff2f2U, 0xbdd66b6bU, 0xb1de6f6fU, 0x5491c5c5U,
0x50603030U, 0x03020101U, 0xa9ce6767U, 0x7d562b2bU,
@@ -405,8 +527,7 @@ static const uint32_t AES_Te1[256] = {
0xc3824141U, 0xb0299999U, 0x775a2d2dU, 0x111e0f0fU,
0xcb7bb0b0U, 0xfca85454U, 0xd66dbbbbU, 0x3a2c1616U,
};
static const uint32_t AES_Te2[256] = {
const uint32_t AES_Te2[256] = {
0x63a5c663U, 0x7c84f87cU, 0x7799ee77U, 0x7b8df67bU,
0xf20dfff2U, 0x6bbdd66bU, 0x6fb1de6fU, 0xc55491c5U,
0x30506030U, 0x01030201U, 0x67a9ce67U, 0x2b7d562bU,
@@ -472,8 +593,7 @@ static const uint32_t AES_Te2[256] = {
0x41c38241U, 0x99b02999U, 0x2d775a2dU, 0x0f111e0fU,
0xb0cb7bb0U, 0x54fca854U, 0xbbd66dbbU, 0x163a2c16U,
};
static const uint32_t AES_Te3[256] = {
const uint32_t AES_Te3[256] = {
0x6363a5c6U, 0x7c7c84f8U, 0x777799eeU, 0x7b7b8df6U,
0xf2f20dffU, 0x6b6bbdd6U, 0x6f6fb1deU, 0xc5c55491U,
0x30305060U, 0x01010302U, 0x6767a9ceU, 0x2b2b7d56U,
@@ -539,8 +660,7 @@ static const uint32_t AES_Te3[256] = {
0x4141c382U, 0x9999b029U, 0x2d2d775aU, 0x0f0f111eU,
0xb0b0cb7bU, 0x5454fca8U, 0xbbbbd66dU, 0x16163a2cU,
};
static const uint32_t AES_Te4[256] = {
const uint32_t AES_Te4[256] = {
0x63636363U, 0x7c7c7c7cU, 0x77777777U, 0x7b7b7b7bU,
0xf2f2f2f2U, 0x6b6b6b6bU, 0x6f6f6f6fU, 0xc5c5c5c5U,
0x30303030U, 0x01010101U, 0x67676767U, 0x2b2b2b2bU,
@@ -606,8 +726,7 @@ static const uint32_t AES_Te4[256] = {
0x41414141U, 0x99999999U, 0x2d2d2d2dU, 0x0f0f0f0fU,
0xb0b0b0b0U, 0x54545454U, 0xbbbbbbbbU, 0x16161616U,
};
static const uint32_t AES_Td0[256] = {
const uint32_t AES_Td0[256] = {
0x51f4a750U, 0x7e416553U, 0x1a17a4c3U, 0x3a275e96U,
0x3bab6bcbU, 0x1f9d45f1U, 0xacfa58abU, 0x4be30393U,
0x2030fa55U, 0xad766df6U, 0x88cc7691U, 0xf5024c25U,
@@ -673,8 +792,7 @@ static const uint32_t AES_Td0[256] = {
0x39a80171U, 0x080cb3deU, 0xd8b4e49cU, 0x6456c190U,
0x7bcb8461U, 0xd532b670U, 0x486c5c74U, 0xd0b85742U,
};
static const uint32_t AES_Td1[256] = {
const uint32_t AES_Td1[256] = {
0x5051f4a7U, 0x537e4165U, 0xc31a17a4U, 0x963a275eU,
0xcb3bab6bU, 0xf11f9d45U, 0xabacfa58U, 0x934be303U,
0x552030faU, 0xf6ad766dU, 0x9188cc76U, 0x25f5024cU,
@@ -740,8 +858,7 @@ static const uint32_t AES_Td1[256] = {
0x7139a801U, 0xde080cb3U, 0x9cd8b4e4U, 0x906456c1U,
0x617bcb84U, 0x70d532b6U, 0x74486c5cU, 0x42d0b857U,
};
static const uint32_t AES_Td2[256] = {
const uint32_t AES_Td2[256] = {
0xa75051f4U, 0x65537e41U, 0xa4c31a17U, 0x5e963a27U,
0x6bcb3babU, 0x45f11f9dU, 0x58abacfaU, 0x03934be3U,
0xfa552030U, 0x6df6ad76U, 0x769188ccU, 0x4c25f502U,
@@ -808,8 +925,7 @@ static const uint32_t AES_Td2[256] = {
0x017139a8U, 0xb3de080cU, 0xe49cd8b4U, 0xc1906456U,
0x84617bcbU, 0xb670d532U, 0x5c74486cU, 0x5742d0b8U,
};
static const uint32_t AES_Td3[256] = {
const uint32_t AES_Td3[256] = {
0xf4a75051U, 0x4165537eU, 0x17a4c31aU, 0x275e963aU,
0xab6bcb3bU, 0x9d45f11fU, 0xfa58abacU, 0xe303934bU,
0x30fa5520U, 0x766df6adU, 0xcc769188U, 0x024c25f5U,
@@ -875,8 +991,7 @@ static const uint32_t AES_Td3[256] = {
0xa8017139U, 0x0cb3de08U, 0xb4e49cd8U, 0x56c19064U,
0xcb84617bU, 0x32b670d5U, 0x6c5c7448U, 0xb85742d0U,
};
static const uint32_t AES_Td4[256] = {
const uint32_t AES_Td4[256] = {
0x52525252U, 0x09090909U, 0x6a6a6a6aU, 0xd5d5d5d5U,
0x30303030U, 0x36363636U, 0xa5a5a5a5U, 0x38383838U,
0xbfbfbfbfU, 0x40404040U, 0xa3a3a3a3U, 0x9e9e9e9eU,
@@ -942,351 +1057,12 @@ static const uint32_t AES_Td4[256] = {
0xe1e1e1e1U, 0x69696969U, 0x14141414U, 0x63636363U,
0x55555555U, 0x21212121U, 0x0c0c0c0cU, 0x7d7d7d7dU,
};
static const u32 rcon[] = {
0x01000000, 0x02000000, 0x04000000, 0x08000000,
0x10000000, 0x20000000, 0x40000000, 0x80000000,
0x1B000000, 0x36000000, /* for 128-bit blocks, Rijndael never uses more than 10 rcon values */
};
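The rcon words are successive GF(2^8) doublings of 0x01 placed in the top byte; 0x80 doubles to 0x1B, then 0x36, which is why ten values suffice. A sketch of a generator, assuming the illustrative xtime() helper above:

static void gen_rcon(uint32_t out[10])
{
    uint8_t r = 0x01;
    for (int i = 0; i < 10; i++) {
        out[i] = (uint32_t)r << 24;
        r = xtime(r);               /* 01, 02, 04, ..., 80, 1b, 36 */
    }
}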
/*
* Perform MixColumns.
*/
static inline void
aesenc_MC_swap(AESState *r, const AESState *st, bool swap)
{
int swap_b = swap * 0xf;
int swap_w = swap * 0x3;
bool be = HOST_BIG_ENDIAN ^ swap;
uint32_t t;
/* Note that AES_mc_rot is encoded for little-endian. */
t = ( AES_mc_rot[st->b[swap_b ^ 0x0]] ^
rol32(AES_mc_rot[st->b[swap_b ^ 0x1]], 8) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0x2]], 16) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0x3]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 0] = t;
t = ( AES_mc_rot[st->b[swap_b ^ 0x4]] ^
rol32(AES_mc_rot[st->b[swap_b ^ 0x5]], 8) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0x6]], 16) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0x7]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 1] = t;
t = ( AES_mc_rot[st->b[swap_b ^ 0x8]] ^
rol32(AES_mc_rot[st->b[swap_b ^ 0x9]], 8) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0xA]], 16) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0xB]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 2] = t;
t = ( AES_mc_rot[st->b[swap_b ^ 0xC]] ^
rol32(AES_mc_rot[st->b[swap_b ^ 0xD]], 8) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0xE]], 16) ^
rol32(AES_mc_rot[st->b[swap_b ^ 0xF]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 3] = t;
}
void aesenc_MC_gen(AESState *r, const AESState *st)
{
aesenc_MC_swap(r, st, false);
}
void aesenc_MC_genrev(AESState *r, const AESState *st)
{
aesenc_MC_swap(r, st, true);
}
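The rol32() chain above works because AES_mc_rot[x] (defined earlier in the file) packs one MixColumns column of x little-endian: byte 0 is {02}.x, bytes 1 and 2 are x, byte 3 is {03}.x. A sketch of an entry generator under that assumption, reusing the illustrative xtime():

static uint32_t mc_rot_entry(uint8_t x)
{
    uint8_t x2 = xtime(x);          /* {02}.x */
    uint8_t x3 = x2 ^ x;            /* {03}.x */
    return ((uint32_t)x3 << 24) | ((uint32_t)x << 16) |
           ((uint32_t)x << 8) | x2;
}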
/*
* Perform SubBytes + ShiftRows + AddRoundKey.
*/
static inline void
aesenc_SB_SR_AK_swap(AESState *ret, const AESState *st,
const AESState *rk, bool swap)
{
const int swap_b = swap ? 15 : 0;
AESState t;
t.b[swap_b ^ 0x0] = AES_sbox[st->b[swap_b ^ AES_SH(0x0)]];
t.b[swap_b ^ 0x1] = AES_sbox[st->b[swap_b ^ AES_SH(0x1)]];
t.b[swap_b ^ 0x2] = AES_sbox[st->b[swap_b ^ AES_SH(0x2)]];
t.b[swap_b ^ 0x3] = AES_sbox[st->b[swap_b ^ AES_SH(0x3)]];
t.b[swap_b ^ 0x4] = AES_sbox[st->b[swap_b ^ AES_SH(0x4)]];
t.b[swap_b ^ 0x5] = AES_sbox[st->b[swap_b ^ AES_SH(0x5)]];
t.b[swap_b ^ 0x6] = AES_sbox[st->b[swap_b ^ AES_SH(0x6)]];
t.b[swap_b ^ 0x7] = AES_sbox[st->b[swap_b ^ AES_SH(0x7)]];
t.b[swap_b ^ 0x8] = AES_sbox[st->b[swap_b ^ AES_SH(0x8)]];
t.b[swap_b ^ 0x9] = AES_sbox[st->b[swap_b ^ AES_SH(0x9)]];
t.b[swap_b ^ 0xa] = AES_sbox[st->b[swap_b ^ AES_SH(0xA)]];
t.b[swap_b ^ 0xb] = AES_sbox[st->b[swap_b ^ AES_SH(0xB)]];
t.b[swap_b ^ 0xc] = AES_sbox[st->b[swap_b ^ AES_SH(0xC)]];
t.b[swap_b ^ 0xd] = AES_sbox[st->b[swap_b ^ AES_SH(0xD)]];
t.b[swap_b ^ 0xe] = AES_sbox[st->b[swap_b ^ AES_SH(0xE)]];
t.b[swap_b ^ 0xf] = AES_sbox[st->b[swap_b ^ AES_SH(0xF)]];
/*
* Perform the AddRoundKey with generic vectors.
* This may be expanded to either host integer or host vector code.
* The key and output endianness match, so no bswap required.
*/
ret->v = t.v ^ rk->v;
}
void aesenc_SB_SR_AK_gen(AESState *r, const AESState *s, const AESState *k)
{
aesenc_SB_SR_AK_swap(r, s, k, false);
}
void aesenc_SB_SR_AK_genrev(AESState *r, const AESState *s, const AESState *k)
{
aesenc_SB_SR_AK_swap(r, s, k, true);
}
/*
* Perform SubBytes + ShiftRows + MixColumns + AddRoundKey.
*/
static inline void
aesenc_SB_SR_MC_AK_swap(AESState *r, const AESState *st,
const AESState *rk, bool swap)
{
int swap_b = swap * 0xf;
int swap_w = swap * 0x3;
bool be = HOST_BIG_ENDIAN ^ swap;
uint32_t w0, w1, w2, w3;
w0 = (AES_Te0[st->b[swap_b ^ AES_SH(0x0)]] ^
AES_Te1[st->b[swap_b ^ AES_SH(0x1)]] ^
AES_Te2[st->b[swap_b ^ AES_SH(0x2)]] ^
AES_Te3[st->b[swap_b ^ AES_SH(0x3)]]);
w1 = (AES_Te0[st->b[swap_b ^ AES_SH(0x4)]] ^
AES_Te1[st->b[swap_b ^ AES_SH(0x5)]] ^
AES_Te2[st->b[swap_b ^ AES_SH(0x6)]] ^
AES_Te3[st->b[swap_b ^ AES_SH(0x7)]]);
w2 = (AES_Te0[st->b[swap_b ^ AES_SH(0x8)]] ^
AES_Te1[st->b[swap_b ^ AES_SH(0x9)]] ^
AES_Te2[st->b[swap_b ^ AES_SH(0xA)]] ^
AES_Te3[st->b[swap_b ^ AES_SH(0xB)]]);
w3 = (AES_Te0[st->b[swap_b ^ AES_SH(0xC)]] ^
AES_Te1[st->b[swap_b ^ AES_SH(0xD)]] ^
AES_Te2[st->b[swap_b ^ AES_SH(0xE)]] ^
AES_Te3[st->b[swap_b ^ AES_SH(0xF)]]);
/* Note that AES_TeX is encoded for big-endian. */
if (!be) {
w0 = bswap32(w0);
w1 = bswap32(w1);
w2 = bswap32(w2);
w3 = bswap32(w3);
}
r->w[swap_w ^ 0] = rk->w[swap_w ^ 0] ^ w0;
r->w[swap_w ^ 1] = rk->w[swap_w ^ 1] ^ w1;
r->w[swap_w ^ 2] = rk->w[swap_w ^ 2] ^ w2;
r->w[swap_w ^ 3] = rk->w[swap_w ^ 3] ^ w3;
}
void aesenc_SB_SR_MC_AK_gen(AESState *r, const AESState *st,
const AESState *rk)
{
aesenc_SB_SR_MC_AK_swap(r, st, rk, false);
}
void aesenc_SB_SR_MC_AK_genrev(AESState *r, const AESState *st,
const AESState *rk)
{
aesenc_SB_SR_MC_AK_swap(r, st, rk, true);
}
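Taken together, these helpers map one-to-one onto the AES round structure. A sketch of how a full AES-128 encryption would compose them, assuming rk[0..10] already holds the expanded round keys in host order:

static void aes128_encrypt_sketch(AESState *st, const AESState rk[11])
{
    st->v ^= rk[0].v;                           /* initial AddRoundKey */
    for (int r = 1; r < 10; r++) {
        aesenc_SB_SR_MC_AK_gen(st, st, &rk[r]); /* nine full rounds */
    }
    aesenc_SB_SR_AK_gen(st, st, &rk[10]);       /* final round omits MC */
}

In-place use (output aliasing the input) is safe here because each helper reads the whole input state before writing any output word.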
/*
* Perform InvMixColumns.
*/
static inline void
aesdec_IMC_swap(AESState *r, const AESState *st, bool swap)
{
int swap_b = swap * 0xf;
int swap_w = swap * 0x3;
bool be = HOST_BIG_ENDIAN ^ swap;
uint32_t t;
/* Note that AES_imc_rot is encoded for little-endian. */
t = ( AES_imc_rot[st->b[swap_b ^ 0x0]] ^
rol32(AES_imc_rot[st->b[swap_b ^ 0x1]], 8) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0x2]], 16) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0x3]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 0] = t;
t = ( AES_imc_rot[st->b[swap_b ^ 0x4]] ^
rol32(AES_imc_rot[st->b[swap_b ^ 0x5]], 8) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0x6]], 16) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0x7]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 1] = t;
t = ( AES_imc_rot[st->b[swap_b ^ 0x8]] ^
rol32(AES_imc_rot[st->b[swap_b ^ 0x9]], 8) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0xA]], 16) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0xB]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 2] = t;
t = ( AES_imc_rot[st->b[swap_b ^ 0xC]] ^
rol32(AES_imc_rot[st->b[swap_b ^ 0xD]], 8) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0xE]], 16) ^
rol32(AES_imc_rot[st->b[swap_b ^ 0xF]], 24));
if (be) {
t = bswap32(t);
}
r->w[swap_w ^ 3] = t;
}
void aesdec_IMC_gen(AESState *r, const AESState *st)
{
aesdec_IMC_swap(r, st, false);
}
void aesdec_IMC_genrev(AESState *r, const AESState *st)
{
aesdec_IMC_swap(r, st, true);
}
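InvMixColumns is the exact inverse of MixColumns, which a quick round-trip self-test can exercise (illustrative only; needs string.h and stdbool.h):

static bool mc_roundtrip_ok(const AESState *in)
{
    AESState mid, out;

    aesenc_MC_gen(&mid, in);        /* MixColumns ... */
    aesdec_IMC_gen(&out, &mid);     /* ... undone by InvMixColumns */
    return memcmp(out.b, in->b, sizeof(out.b)) == 0;
}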
/*
* Perform InvSubBytes + InvShiftRows + AddRoundKey.
*/
static inline void
aesdec_ISB_ISR_AK_swap(AESState *ret, const AESState *st,
const AESState *rk, bool swap)
{
const int swap_b = swap ? 15 : 0;
AESState t;
t.b[swap_b ^ 0x0] = AES_isbox[st->b[swap_b ^ AES_ISH(0x0)]];
t.b[swap_b ^ 0x1] = AES_isbox[st->b[swap_b ^ AES_ISH(0x1)]];
t.b[swap_b ^ 0x2] = AES_isbox[st->b[swap_b ^ AES_ISH(0x2)]];
t.b[swap_b ^ 0x3] = AES_isbox[st->b[swap_b ^ AES_ISH(0x3)]];
t.b[swap_b ^ 0x4] = AES_isbox[st->b[swap_b ^ AES_ISH(0x4)]];
t.b[swap_b ^ 0x5] = AES_isbox[st->b[swap_b ^ AES_ISH(0x5)]];
t.b[swap_b ^ 0x6] = AES_isbox[st->b[swap_b ^ AES_ISH(0x6)]];
t.b[swap_b ^ 0x7] = AES_isbox[st->b[swap_b ^ AES_ISH(0x7)]];
t.b[swap_b ^ 0x8] = AES_isbox[st->b[swap_b ^ AES_ISH(0x8)]];
t.b[swap_b ^ 0x9] = AES_isbox[st->b[swap_b ^ AES_ISH(0x9)]];
t.b[swap_b ^ 0xa] = AES_isbox[st->b[swap_b ^ AES_ISH(0xA)]];
t.b[swap_b ^ 0xb] = AES_isbox[st->b[swap_b ^ AES_ISH(0xB)]];
t.b[swap_b ^ 0xc] = AES_isbox[st->b[swap_b ^ AES_ISH(0xC)]];
t.b[swap_b ^ 0xd] = AES_isbox[st->b[swap_b ^ AES_ISH(0xD)]];
t.b[swap_b ^ 0xe] = AES_isbox[st->b[swap_b ^ AES_ISH(0xE)]];
t.b[swap_b ^ 0xf] = AES_isbox[st->b[swap_b ^ AES_ISH(0xF)]];
/*
* Perform the AddRoundKey with generic vectors.
* This may be expanded to either host integer or host vector code.
* The key and output endianness match, so no bswap required.
*/
ret->v = t.v ^ rk->v;
}
void aesdec_ISB_ISR_AK_gen(AESState *r, const AESState *s, const AESState *k)
{
aesdec_ISB_ISR_AK_swap(r, s, k, false);
}
void aesdec_ISB_ISR_AK_genrev(AESState *r, const AESState *s, const AESState *k)
{
aesdec_ISB_ISR_AK_swap(r, s, k, true);
}
/*
* Perform InvSubBytes + InvShiftRows + InvMixColumns + AddRoundKey.
*/
static inline void
aesdec_ISB_ISR_IMC_AK_swap(AESState *r, const AESState *st,
const AESState *rk, bool swap)
{
int swap_b = swap * 0xf;
int swap_w = swap * 0x3;
bool be = HOST_BIG_ENDIAN ^ swap;
uint32_t w0, w1, w2, w3;
w0 = (AES_Td0[st->b[swap_b ^ AES_ISH(0x0)]] ^
AES_Td1[st->b[swap_b ^ AES_ISH(0x1)]] ^
AES_Td2[st->b[swap_b ^ AES_ISH(0x2)]] ^
AES_Td3[st->b[swap_b ^ AES_ISH(0x3)]]);
w1 = (AES_Td0[st->b[swap_b ^ AES_ISH(0x4)]] ^
AES_Td1[st->b[swap_b ^ AES_ISH(0x5)]] ^
AES_Td2[st->b[swap_b ^ AES_ISH(0x6)]] ^
AES_Td3[st->b[swap_b ^ AES_ISH(0x7)]]);
w2 = (AES_Td0[st->b[swap_b ^ AES_ISH(0x8)]] ^
AES_Td1[st->b[swap_b ^ AES_ISH(0x9)]] ^
AES_Td2[st->b[swap_b ^ AES_ISH(0xA)]] ^
AES_Td3[st->b[swap_b ^ AES_ISH(0xB)]]);
w3 = (AES_Td0[st->b[swap_b ^ AES_ISH(0xC)]] ^
AES_Td1[st->b[swap_b ^ AES_ISH(0xD)]] ^
AES_Td2[st->b[swap_b ^ AES_ISH(0xE)]] ^
AES_Td3[st->b[swap_b ^ AES_ISH(0xF)]]);
/* Note that AES_TdX is encoded for big-endian. */
if (!be) {
w0 = bswap32(w0);
w1 = bswap32(w1);
w2 = bswap32(w2);
w3 = bswap32(w3);
}
r->w[swap_w ^ 0] = rk->w[swap_w ^ 0] ^ w0;
r->w[swap_w ^ 1] = rk->w[swap_w ^ 1] ^ w1;
r->w[swap_w ^ 2] = rk->w[swap_w ^ 2] ^ w2;
r->w[swap_w ^ 3] = rk->w[swap_w ^ 3] ^ w3;
}
void aesdec_ISB_ISR_IMC_AK_gen(AESState *r, const AESState *st,
const AESState *rk)
{
aesdec_ISB_ISR_IMC_AK_swap(r, st, rk, false);
}
void aesdec_ISB_ISR_IMC_AK_genrev(AESState *r, const AESState *st,
const AESState *rk)
{
aesdec_ISB_ISR_IMC_AK_swap(r, st, rk, true);
}
void aesdec_ISB_ISR_AK_IMC_gen(AESState *ret, const AESState *st,
const AESState *rk)
{
aesdec_ISB_ISR_AK_gen(ret, st, rk);
aesdec_IMC_gen(ret, ret);
}
void aesdec_ISB_ISR_AK_IMC_genrev(AESState *ret, const AESState *st,
const AESState *rk)
{
aesdec_ISB_ISR_AK_genrev(ret, st, rk);
aesdec_IMC_genrev(ret, ret);
}
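The two decrypt orderings are interchangeable because InvMixColumns is linear over GF(2): IMC(x ^ k) = IMC(x) ^ IMC(k). A round that applies AddRoundKey before InvMixColumns therefore equals the IMC-first variant fed a pre-transformed round key, which a small check can demonstrate (a sketch, not from the patch):

static bool ak_imc_orders_agree(const AESState *st, const AESState *k)
{
    AESState a, b, ik;

    aesdec_ISB_ISR_AK_IMC_gen(&a, st, k);   /* AddRoundKey, then IMC */
    aesdec_IMC_gen(&ik, k);                 /* fold IMC into the key */
    aesdec_ISB_ISR_IMC_AK_gen(&b, st, &ik); /* IMC, then AddRoundKey */
    return memcmp(a.b, b.b, sizeof(a.b)) == 0;
}

This is the usual reason a decrypt key schedule can be pre-transformed once with InvMixColumns instead of reordering every round.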
/**
* Expand the cipher key into the encryption key schedule.
*/

View File

@@ -706,14 +706,14 @@ qcrypto_block_luks_store_key(QCryptoBlock *block,
assert(slot_idx < QCRYPTO_BLOCK_LUKS_NUM_KEY_SLOTS);
slot = &luks->header.key_slots[slot_idx];
splitkeylen = luks->header.master_key_len * slot->stripes;
if (qcrypto_random_bytes(slot->salt,
QCRYPTO_BLOCK_LUKS_SALT_LEN,
errp) < 0) {
goto cleanup;
}
splitkeylen = luks->header.master_key_len * slot->stripes;
/*
* Determine how many iterations are required to
* hash the user password while consuming 1 second of compute

View File

@@ -6,11 +6,7 @@ common_ss.add(when: 'CONFIG_M68K_DIS', if_true: files('m68k.c'))
common_ss.add(when: 'CONFIG_MICROBLAZE_DIS', if_true: files('microblaze.c'))
common_ss.add(when: 'CONFIG_MIPS_DIS', if_true: files('mips.c', 'nanomips.c'))
common_ss.add(when: 'CONFIG_NIOS2_DIS', if_true: files('nios2.c'))
common_ss.add(when: 'CONFIG_RISCV_DIS', if_true: files(
'riscv.c',
'riscv-xthead.c',
'riscv-xventana.c'
))
common_ss.add(when: 'CONFIG_RISCV_DIS', if_true: files('riscv.c'))
common_ss.add(when: 'CONFIG_SH4_DIS', if_true: files('sh4.c'))
common_ss.add(when: 'CONFIG_SPARC_DIS', if_true: files('sparc.c'))
common_ss.add(when: 'CONFIG_XTENSA_DIS', if_true: files('xtensa.c'))

View File

@@ -1,707 +0,0 @@
/*
* QEMU RISC-V Disassembler for xthead.
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "disas/riscv.h"
#include "disas/riscv-xthead.h"
typedef enum {
/* 0 is reserved for rv_op_illegal. */
/* XTheadBa */
rv_op_th_addsl = 1,
/* XTheadBb */
rv_op_th_srri,
rv_op_th_srriw,
rv_op_th_ext,
rv_op_th_extu,
rv_op_th_ff0,
rv_op_th_ff1,
rv_op_th_rev,
rv_op_th_revw,
rv_op_th_tstnbz,
/* XTheadBs */
rv_op_th_tst,
/* XTheadCmo */
rv_op_th_dcache_call,
rv_op_th_dcache_ciall,
rv_op_th_dcache_iall,
rv_op_th_dcache_cpa,
rv_op_th_dcache_cipa,
rv_op_th_dcache_ipa,
rv_op_th_dcache_cva,
rv_op_th_dcache_civa,
rv_op_th_dcache_iva,
rv_op_th_dcache_csw,
rv_op_th_dcache_cisw,
rv_op_th_dcache_isw,
rv_op_th_dcache_cpal1,
rv_op_th_dcache_cval1,
rv_op_th_icache_iall,
rv_op_th_icache_ialls,
rv_op_th_icache_ipa,
rv_op_th_icache_iva,
rv_op_th_l2cache_call,
rv_op_th_l2cache_ciall,
rv_op_th_l2cache_iall,
/* XTheadCondMov */
rv_op_th_mveqz,
rv_op_th_mvnez,
/* XTheadFMemIdx */
rv_op_th_flrd,
rv_op_th_flrw,
rv_op_th_flurd,
rv_op_th_flurw,
rv_op_th_fsrd,
rv_op_th_fsrw,
rv_op_th_fsurd,
rv_op_th_fsurw,
/* XTheadFmv */
rv_op_th_fmv_hw_x,
rv_op_th_fmv_x_hw,
/* XTheadMac */
rv_op_th_mula,
rv_op_th_mulah,
rv_op_th_mulaw,
rv_op_th_muls,
rv_op_th_mulsw,
rv_op_th_mulsh,
/* XTheadMemIdx */
rv_op_th_lbia,
rv_op_th_lbib,
rv_op_th_lbuia,
rv_op_th_lbuib,
rv_op_th_lhia,
rv_op_th_lhib,
rv_op_th_lhuia,
rv_op_th_lhuib,
rv_op_th_lwia,
rv_op_th_lwib,
rv_op_th_lwuia,
rv_op_th_lwuib,
rv_op_th_ldia,
rv_op_th_ldib,
rv_op_th_sbia,
rv_op_th_sbib,
rv_op_th_shia,
rv_op_th_shib,
rv_op_th_swia,
rv_op_th_swib,
rv_op_th_sdia,
rv_op_th_sdib,
rv_op_th_lrb,
rv_op_th_lrbu,
rv_op_th_lrh,
rv_op_th_lrhu,
rv_op_th_lrw,
rv_op_th_lrwu,
rv_op_th_lrd,
rv_op_th_srb,
rv_op_th_srh,
rv_op_th_srw,
rv_op_th_srd,
rv_op_th_lurb,
rv_op_th_lurbu,
rv_op_th_lurh,
rv_op_th_lurhu,
rv_op_th_lurw,
rv_op_th_lurwu,
rv_op_th_lurd,
rv_op_th_surb,
rv_op_th_surh,
rv_op_th_surw,
rv_op_th_surd,
/* XTheadMemPair */
rv_op_th_ldd,
rv_op_th_lwd,
rv_op_th_lwud,
rv_op_th_sdd,
rv_op_th_swd,
/* XTheadSync */
rv_op_th_sfence_vmas,
rv_op_th_sync,
rv_op_th_sync_i,
rv_op_th_sync_is,
rv_op_th_sync_s,
} rv_xthead_op;
const rv_opcode_data xthead_opcode_data[] = {
{ "th.illegal", rv_codec_illegal, rv_fmt_none, NULL, 0, 0, 0 },
/* XTheadBa */
{ "th.addsl", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
/* XTheadBb */
{ "th.srri", rv_codec_r2_imm6, rv_fmt_rd_rs1_imm, NULL, 0, 0, 0 },
{ "th.srriw", rv_codec_r2_imm5, rv_fmt_rd_rs1_imm, NULL, 0, 0, 0 },
{ "th.ext", rv_codec_r2_immhl, rv_fmt_rd_rs1_immh_imml, NULL, 0, 0, 0 },
{ "th.extu", rv_codec_r2_immhl, rv_fmt_rd_rs1_immh_imml, NULL, 0, 0, 0 },
{ "th.ff0", rv_codec_r2, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "th.ff1", rv_codec_r2, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "th.rev", rv_codec_r2, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "th.revw", rv_codec_r2, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "th.tstnbz", rv_codec_r2, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
/* XTheadBs */
{ "th.tst", rv_codec_r2_imm6, rv_fmt_rd_rs1_imm, NULL, 0, 0, 0 },
/* XTheadCmo */
{ "th.dcache.call", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.dcache.ciall", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.dcache.iall", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.dcache.cpa", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.cipa", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.ipa", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.cva", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.civa", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.iva", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.csw", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.cisw", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.isw", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.cpal1", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.dcache.cval1", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.icache.iall", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.icache.ialls", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.icache.ipa", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.icache.iva", rv_codec_r, rv_fmt_rs1, NULL, 0, 0, 0 },
{ "th.l2cache.call", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.l2cache.ciall", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.l2cache.iall", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
/* XTheadCondMov */
{ "th.mveqz", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "th.mvnez", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
/* XTheadFMemIdx */
{ "th.flrd", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.flrw", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.flurd", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.flurw", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.fsrd", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.fsrw", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.fsurd", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.fsurw", rv_codec_r_imm2, rv_fmt_frd_rs1_rs2_imm, NULL, 0, 0, 0 },
/* XTheadFmv */
{ "th.fmv.hw.x", rv_codec_r, rv_fmt_rd_frs1, NULL, 0, 0, 0 },
{ "th.fmv.x.hw", rv_codec_r, rv_fmt_rd_frs1, NULL, 0, 0, 0 },
/* XTheadMac */
{ "th.mula", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "th.mulaw", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "th.mulah", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "th.muls", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "th.mulsw", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "th.mulsh", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
/* XTheadMemIdx */
{ "th.lbia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lbib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml, NULL, 0, 0, 0 },
{ "th.lbuia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lbuib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lhia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lhib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lhuia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lhuib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lwia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lwib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lwuia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lwuib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.ldia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.ldib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.sbia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.sbib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.shia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.shib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.swia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.swib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.sdia", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.sdib", rv_codec_r2_imm2_imm5, rv_fmt_rd_rs1_immh_imml_addr, NULL, 0, 0, 0 },
{ "th.lrb", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lrbu", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lrh", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lrhu", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lrw", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lrwu", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lrd", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.srb", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.srh", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.srw", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.srd", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lurb", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lurbu", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lurh", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lurhu", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lurw", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lurwu", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.lurd", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.surb", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.surh", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.surw", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
{ "th.surd", rv_codec_r_imm2, rv_fmt_rd_rs1_rs2_imm, NULL, 0, 0, 0 },
/* XTheadMemPair */
{ "th.ldd", rv_codec_r_imm2, rv_fmt_rd2_imm, NULL, 0, 0, 0 },
{ "th.lwd", rv_codec_r_imm2, rv_fmt_rd2_imm, NULL, 0, 0, 0 },
{ "th.lwud", rv_codec_r_imm2, rv_fmt_rd2_imm, NULL, 0, 0, 0 },
{ "th.sdd", rv_codec_r_imm2, rv_fmt_rd2_imm, NULL, 0, 0, 0 },
{ "th.swd", rv_codec_r_imm2, rv_fmt_rd2_imm, NULL, 0, 0, 0 },
/* XTheadSync */
{ "th.sfence.vmas", rv_codec_r, rv_fmt_rs1_rs2, NULL, 0, 0, 0 },
{ "th.sync", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.sync.i", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.sync.is", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
{ "th.sync.s", rv_codec_none, rv_fmt_none, NULL, 0, 0, 0 },
};
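This array is indexed by rv_xthead_op, so its entry order must track the enum above (note the XTheadMac rows follow the enum's mulah-before-mulaw order). A build-time guard along these lines would catch drift; illustrative, not in the original file:

_Static_assert(sizeof(xthead_opcode_data) / sizeof(xthead_opcode_data[0])
               == rv_op_th_sync_s + 1,
               "xthead_opcode_data out of sync with rv_xthead_op");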
void decode_xtheadba(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 1:
switch ((inst >> 25) & 0b1111111) {
case 0b0000000:
case 0b0000001:
case 0b0000010:
case 0b0000011: op = rv_op_th_addsl; break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadbb(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 1:
switch ((inst >> 25) & 0b1111111) {
case 0b0001010: op = rv_op_th_srriw; break;
case 0b1000000:
if (((inst >> 20) & 0b11111) == 0) {
op = rv_op_th_tstnbz;
}
break;
case 0b1000001:
if (((inst >> 20) & 0b11111) == 0) {
op = rv_op_th_rev;
}
break;
case 0b1000010:
if (((inst >> 20) & 0b11111) == 0) {
op = rv_op_th_ff0;
}
break;
case 0b1000011:
if (((inst >> 20) & 0b11111) == 0) {
op = rv_op_th_ff1;
}
break;
case 0b1001000:
if (((inst >> 20) & 0b11111) == 0) {
op = rv_op_th_revw;
}
break;
case 0b0000100:
case 0b0000101: op = rv_op_th_srri; break;
}
break;
case 2: op = rv_op_th_ext; break;
case 3: op = rv_op_th_extu; break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadbs(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 1:
switch ((inst >> 26) & 0b111111) {
case 0b100010: op = rv_op_th_tst; break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadcmo(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 0:
switch ((inst >> 20) & 0b111111111111) {
case 0b000000000001:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_dcache_call;
}
break;
case 0b000000000011:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_dcache_ciall;
}
break;
case 0b000000000010:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_dcache_iall;
}
break;
case 0b000000101001: op = rv_op_th_dcache_cpa; break;
case 0b000000101011: op = rv_op_th_dcache_cipa; break;
case 0b000000101010: op = rv_op_th_dcache_ipa; break;
case 0b000000100101: op = rv_op_th_dcache_cva; break;
case 0b000000100111: op = rv_op_th_dcache_civa; break;
case 0b000000100110: op = rv_op_th_dcache_iva; break;
case 0b000000100001: op = rv_op_th_dcache_csw; break;
case 0b000000100011: op = rv_op_th_dcache_cisw; break;
case 0b000000100010: op = rv_op_th_dcache_isw; break;
case 0b000000101000: op = rv_op_th_dcache_cpal1; break;
case 0b000000100100: op = rv_op_th_dcache_cval1; break;
case 0b000000010000:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_icache_iall;
}
break;
case 0b000000010001:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_icache_ialls;
}
break;
case 0b000000111000: op = rv_op_th_icache_ipa; break;
case 0b000000110000: op = rv_op_th_icache_iva; break;
case 0b000000010101:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_l2cache_call;
}
break;
case 0b000000010111:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_l2cache_ciall;
}
break;
case 0b000000010110:
if (((inst >> 15) & 0b11111) == 0) {
op = rv_op_th_l2cache_iall;
}
break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadcondmov(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 1:
switch ((inst >> 25) & 0b1111111) {
case 0b0100000: op = rv_op_th_mveqz; break;
case 0b0100001: op = rv_op_th_mvnez; break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadfmemidx(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 6:
switch ((inst >> 27) & 0b11111) {
case 8: op = rv_op_th_flrw; break;
case 10: op = rv_op_th_flurw; break;
case 12: op = rv_op_th_flrd; break;
case 14: op = rv_op_th_flurd; break;
}
break;
case 7:
switch ((inst >> 27) & 0b11111) {
case 8: op = rv_op_th_fsrw; break;
case 10: op = rv_op_th_fsurw; break;
case 12: op = rv_op_th_fsrd; break;
case 14: op = rv_op_th_fsurd; break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadfmv(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 1:
switch ((inst >> 25) & 0b1111111) {
case 0b1010000:
if (((inst >> 20) & 0b11111) == 0) {
op = rv_op_th_fmv_hw_x;
}
break;
case 0b1100000:
if (((inst >> 20) & 0b11111) == 0) {
op = rv_op_th_fmv_x_hw;
}
break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadmac(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 1:
switch ((inst >> 25) & 0b1111111) {
case 0b0010000: op = rv_op_th_mula; break;
case 0b0010001: op = rv_op_th_muls; break;
case 0b0010010: op = rv_op_th_mulaw; break;
case 0b0010011: op = rv_op_th_mulsw; break;
case 0b0010100: op = rv_op_th_mulah; break;
case 0b0010101: op = rv_op_th_mulsh; break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadmemidx(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 4:
switch ((inst >> 27) & 0b11111) {
case 0: op = rv_op_th_lrb; break;
case 1: op = rv_op_th_lbib; break;
case 2: op = rv_op_th_lurb; break;
case 3: op = rv_op_th_lbia; break;
case 4: op = rv_op_th_lrh; break;
case 5: op = rv_op_th_lhib; break;
case 6: op = rv_op_th_lurh; break;
case 7: op = rv_op_th_lhia; break;
case 8: op = rv_op_th_lrw; break;
case 9: op = rv_op_th_lwib; break;
case 10: op = rv_op_th_lurw; break;
case 11: op = rv_op_th_lwia; break;
case 12: op = rv_op_th_lrd; break;
case 13: op = rv_op_th_ldib; break;
case 14: op = rv_op_th_lurd; break;
case 15: op = rv_op_th_ldia; break;
case 16: op = rv_op_th_lrbu; break;
case 17: op = rv_op_th_lbuib; break;
case 18: op = rv_op_th_lurbu; break;
case 19: op = rv_op_th_lbuia; break;
case 20: op = rv_op_th_lrhu; break;
case 21: op = rv_op_th_lhuib; break;
case 22: op = rv_op_th_lurhu; break;
case 23: op = rv_op_th_lhuia; break;
case 24: op = rv_op_th_lrwu; break;
case 25: op = rv_op_th_lwuib; break;
case 26: op = rv_op_th_lurwu; break;
case 27: op = rv_op_th_lwuia; break;
}
break;
case 5:
switch ((inst >> 27) & 0b11111) {
case 0: op = rv_op_th_srb; break;
case 1: op = rv_op_th_sbib; break;
case 2: op = rv_op_th_surb; break;
case 3: op = rv_op_th_sbia; break;
case 4: op = rv_op_th_srh; break;
case 5: op = rv_op_th_shib; break;
case 6: op = rv_op_th_surh; break;
case 7: op = rv_op_th_shia; break;
case 8: op = rv_op_th_srw; break;
case 9: op = rv_op_th_swib; break;
case 10: op = rv_op_th_surw; break;
case 11: op = rv_op_th_swia; break;
case 12: op = rv_op_th_srd; break;
case 13: op = rv_op_th_sdib; break;
case 14: op = rv_op_th_surd; break;
case 15: op = rv_op_th_sdia; break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadmempair(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 4:
switch ((inst >> 27) & 0b11111) {
case 28: op = rv_op_th_lwd; break;
case 30: op = rv_op_th_lwud; break;
case 31: op = rv_op_th_ldd; break;
}
break;
case 5:
switch ((inst >> 27) & 0b11111) {
case 28: op = rv_op_th_swd; break;
case 31: op = rv_op_th_sdd; break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
void decode_xtheadsync(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 2:
/* custom-0 */
switch ((inst >> 12) & 0b111) {
case 0:
switch ((inst >> 25) & 0b1111111) {
case 0b0000010: op = rv_op_th_sfence_vmas; break;
case 0b0000000:
switch ((inst >> 20) & 0b11111) {
case 0b11000: op = rv_op_th_sync; break;
case 0b11010: op = rv_op_th_sync_i; break;
case 0b11011: op = rv_op_th_sync_is; break;
case 0b11001: op = rv_op_th_sync_s; break;
}
break;
}
break;
}
break;
/* custom-0 */
}
break;
}
dec->op = op;
}
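A worked example of the dispatch above (the function name is illustrative, not part of the file): the word 0x0180000b carries opcode custom-0 (0x0b), funct3 0, funct7 0 and 0b11000 in the rs2 field, so it decodes as th.sync.

static rv_opcode example_decode_th_sync(void)
{
    rv_decode dec = { .inst = 0x0180000b };

    decode_xtheadsync(&dec, rv64);
    return dec.op;                  /* rv_op_th_sync */
}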

View File

@@ -1,28 +0,0 @@
/*
* QEMU disassembler -- RISC-V specific header (xthead*).
*
* Copyright (c) 2023 VRULL GmbH
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef DISAS_RISCV_XTHEAD_H
#define DISAS_RISCV_XTHEAD_H
#include "disas/riscv.h"
extern const rv_opcode_data xthead_opcode_data[];
void decode_xtheadba(rv_decode *, rv_isa);
void decode_xtheadbb(rv_decode *, rv_isa);
void decode_xtheadbs(rv_decode *, rv_isa);
void decode_xtheadcmo(rv_decode *, rv_isa);
void decode_xtheadcondmov(rv_decode *, rv_isa);
void decode_xtheadfmemidx(rv_decode *, rv_isa);
void decode_xtheadfmv(rv_decode *, rv_isa);
void decode_xtheadmac(rv_decode *, rv_isa);
void decode_xtheadmemidx(rv_decode *, rv_isa);
void decode_xtheadmempair(rv_decode *, rv_isa);
void decode_xtheadsync(rv_decode *, rv_isa);
#endif /* DISAS_RISCV_XTHEAD_H */

View File

@@ -1,41 +0,0 @@
/*
* QEMU RISC-V Disassembler for xventana.
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "disas/riscv.h"
#include "disas/riscv-xventana.h"
typedef enum {
/* 0 is reserved for rv_op_illegal. */
ventana_op_vt_maskc = 1,
ventana_op_vt_maskcn = 2,
} rv_ventana_op;
const rv_opcode_data ventana_opcode_data[] = {
{ "vt.illegal", rv_codec_illegal, rv_fmt_none, NULL, 0, 0, 0 },
{ "vt.maskc", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "vt.maskcn", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
};
void decode_xventanacondops(rv_decode *dec, rv_isa isa)
{
rv_inst inst = dec->inst;
rv_opcode op = rv_op_illegal;
switch (((inst >> 0) & 0b11)) {
case 3:
switch (((inst >> 2) & 0b11111)) {
case 30:
switch (((inst >> 22) & 0b1111111000) | ((inst >> 12) & 0b0000000111)) {
case 6: op = ventana_op_vt_maskc; break;
case 7: op = ventana_op_vt_maskcn; break;
}
break;
}
break;
}
dec->op = op;
}
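Same idea for the Ventana decoder (illustrative, not part of the file): 0x00c5e57b has opcode bits [6:2] = 30 (custom-3), funct3 0b110 and funct7 0, so the selector above evaluates to 6 and the word is vt.maskc a0,a1,a2 (rd = rs2 ? rs1 : 0).

static rv_opcode example_decode_vt_maskc(void)
{
    rv_decode dec = { .inst = 0x00c5e57b };

    decode_xventanacondops(&dec, rv64);
    return dec.op;                  /* ventana_op_vt_maskc */
}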

View File

@@ -1,18 +0,0 @@
/*
* QEMU disassembler -- RISC-V specific header (xventana*).
*
* Copyright (c) 2023 VRULL GmbH
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef DISAS_RISCV_XVENTANA_H
#define DISAS_RISCV_XVENTANA_H
#include "disas/riscv.h"
extern const rv_opcode_data ventana_opcode_data[];
void decode_xventanacondops(rv_decode*, rv_isa);
#endif /* DISAS_RISCV_XVENTANA_H */

View File

@@ -18,17 +18,162 @@
*/
#include "qemu/osdep.h"
#include "qemu/bitops.h"
#include "disas/dis-asm.h"
#include "target/riscv/cpu_cfg.h"
#include "disas/riscv.h"
/* Vendor extensions */
#include "disas/riscv-xthead.h"
#include "disas/riscv-xventana.h"
/* types */
typedef uint64_t rv_inst;
typedef uint16_t rv_opcode;
/* enums */
typedef enum {
rv32,
rv64,
rv128
} rv_isa;
typedef enum {
rv_rm_rne = 0,
rv_rm_rtz = 1,
rv_rm_rdn = 2,
rv_rm_rup = 3,
rv_rm_rmm = 4,
rv_rm_dyn = 7,
} rv_rm;
typedef enum {
rv_fence_i = 8,
rv_fence_o = 4,
rv_fence_r = 2,
rv_fence_w = 1,
} rv_fence;
typedef enum {
rv_ireg_zero,
rv_ireg_ra,
rv_ireg_sp,
rv_ireg_gp,
rv_ireg_tp,
rv_ireg_t0,
rv_ireg_t1,
rv_ireg_t2,
rv_ireg_s0,
rv_ireg_s1,
rv_ireg_a0,
rv_ireg_a1,
rv_ireg_a2,
rv_ireg_a3,
rv_ireg_a4,
rv_ireg_a5,
rv_ireg_a6,
rv_ireg_a7,
rv_ireg_s2,
rv_ireg_s3,
rv_ireg_s4,
rv_ireg_s5,
rv_ireg_s6,
rv_ireg_s7,
rv_ireg_s8,
rv_ireg_s9,
rv_ireg_s10,
rv_ireg_s11,
rv_ireg_t3,
rv_ireg_t4,
rv_ireg_t5,
rv_ireg_t6,
} rv_ireg;
typedef enum {
rvc_end,
rvc_rd_eq_ra,
rvc_rd_eq_x0,
rvc_rs1_eq_x0,
rvc_rs2_eq_x0,
rvc_rs2_eq_rs1,
rvc_rs1_eq_ra,
rvc_imm_eq_zero,
rvc_imm_eq_n1,
rvc_imm_eq_p1,
rvc_csr_eq_0x001,
rvc_csr_eq_0x002,
rvc_csr_eq_0x003,
rvc_csr_eq_0xc00,
rvc_csr_eq_0xc01,
rvc_csr_eq_0xc02,
rvc_csr_eq_0xc80,
rvc_csr_eq_0xc81,
rvc_csr_eq_0xc82,
} rvc_constraint;
typedef enum {
rv_codec_illegal,
rv_codec_none,
rv_codec_u,
rv_codec_uj,
rv_codec_i,
rv_codec_i_sh5,
rv_codec_i_sh6,
rv_codec_i_sh7,
rv_codec_i_csr,
rv_codec_s,
rv_codec_sb,
rv_codec_r,
rv_codec_r_m,
rv_codec_r4_m,
rv_codec_r_a,
rv_codec_r_l,
rv_codec_r_f,
rv_codec_cb,
rv_codec_cb_imm,
rv_codec_cb_sh5,
rv_codec_cb_sh6,
rv_codec_ci,
rv_codec_ci_sh5,
rv_codec_ci_sh6,
rv_codec_ci_16sp,
rv_codec_ci_lwsp,
rv_codec_ci_ldsp,
rv_codec_ci_lqsp,
rv_codec_ci_li,
rv_codec_ci_lui,
rv_codec_ci_none,
rv_codec_ciw_4spn,
rv_codec_cj,
rv_codec_cj_jal,
rv_codec_cl_lw,
rv_codec_cl_ld,
rv_codec_cl_lq,
rv_codec_cr,
rv_codec_cr_mv,
rv_codec_cr_jalr,
rv_codec_cr_jr,
rv_codec_cs,
rv_codec_cs_sw,
rv_codec_cs_sd,
rv_codec_cs_sq,
rv_codec_css_swsp,
rv_codec_css_sdsp,
rv_codec_css_sqsp,
rv_codec_k_bs,
rv_codec_k_rnum,
rv_codec_v_r,
rv_codec_v_ldst,
rv_codec_v_i,
rv_codec_vsetvli,
rv_codec_vsetivli,
rv_codec_zcb_ext,
rv_codec_zcb_mul,
rv_codec_zcb_lb,
rv_codec_zcb_lh,
rv_codec_zcmp_cm_pushpop,
rv_codec_zcmp_cm_mv,
rv_codec_zcmt_jt,
} rv_codec;
typedef enum {
rv_op_illegal = 0,
rv_op_lui = 1,
rv_op_auipc = 2,
rv_op_jal = 3,
@@ -819,51 +964,53 @@ typedef enum {
rv_op_cm_jalt = 788,
rv_op_czero_eqz = 789,
rv_op_czero_nez = 790,
rv_op_fcvt_bf16_s = 791,
rv_op_fcvt_s_bf16 = 792,
rv_op_vfncvtbf16_f_f_w = 793,
rv_op_vfwcvtbf16_f_f_v = 794,
rv_op_vfwmaccbf16_vv = 795,
rv_op_vfwmaccbf16_vf = 796,
rv_op_flh = 797,
rv_op_fsh = 798,
rv_op_fmv_h_x = 799,
rv_op_fmv_x_h = 800,
rv_op_fli_s = 801,
rv_op_fli_d = 802,
rv_op_fli_q = 803,
rv_op_fli_h = 804,
rv_op_fminm_s = 805,
rv_op_fmaxm_s = 806,
rv_op_fminm_d = 807,
rv_op_fmaxm_d = 808,
rv_op_fminm_q = 809,
rv_op_fmaxm_q = 810,
rv_op_fminm_h = 811,
rv_op_fmaxm_h = 812,
rv_op_fround_s = 813,
rv_op_froundnx_s = 814,
rv_op_fround_d = 815,
rv_op_froundnx_d = 816,
rv_op_fround_q = 817,
rv_op_froundnx_q = 818,
rv_op_fround_h = 819,
rv_op_froundnx_h = 820,
rv_op_fcvtmod_w_d = 821,
rv_op_fmvh_x_d = 822,
rv_op_fmvp_d_x = 823,
rv_op_fmvh_x_q = 824,
rv_op_fmvp_q_x = 825,
rv_op_fleq_s = 826,
rv_op_fltq_s = 827,
rv_op_fleq_d = 828,
rv_op_fltq_d = 829,
rv_op_fleq_q = 830,
rv_op_fltq_q = 831,
rv_op_fleq_h = 832,
rv_op_fltq_h = 833,
} rv_op;
/* structures */
typedef struct {
RISCVCPUConfig *cfg;
uint64_t pc;
uint64_t inst;
int32_t imm;
uint16_t op;
uint8_t codec;
uint8_t rd;
uint8_t rs1;
uint8_t rs2;
uint8_t rs3;
uint8_t rm;
uint8_t pred;
uint8_t succ;
uint8_t aq;
uint8_t rl;
uint8_t bs;
uint8_t rnum;
uint8_t vm;
uint32_t vzimm;
uint8_t rlist;
} rv_decode;
typedef struct {
const int op;
const rvc_constraint *constraints;
} rv_comp_data;
enum {
rvcd_imm_nz = 0x1
};
typedef struct {
const char * const name;
const rv_codec codec;
const char * const format;
const rv_comp_data *pseudo;
const short decomp_rv32;
const short decomp_rv64;
const short decomp_rv128;
const short decomp_data;
} rv_opcode_data;
/* register names */
static const char rv_ireg_name_sym[32][5] = {
@@ -887,22 +1034,78 @@ static const char rv_vreg_name_sym[32][4] = {
"v24", "v25", "v26", "v27", "v28", "v29", "v30", "v31"
};
/* The FLI.[HSDQ] numeric constants (0.0 for symbolic constants).
* The constants use the hex floating-point literal representation
* that is printed when using the printf %a format specifier,
* which matches the output that is generated by the disassembler.
*/
static const char rv_fli_name_const[32][9] =
{
"0x1p+0", "min", "0x1p-16", "0x1p-15",
"0x1p-8", "0x1p-7", "0x1p-4", "0x1p-3",
"0x1p-2", "0x1.4p-2", "0x1.8p-2", "0x1.cp-2",
"0x1p-1", "0x1.4p-1", "0x1.8p-1", "0x1.cp-1",
"0x1p+0", "0x1.4p+0", "0x1.8p+0", "0x1.cp+0",
"0x1p+1", "0x1.4p+1", "0x1.8p+1", "0x1p+2",
"0x1p+3", "0x1p+4", "0x1p+7", "0x1p+8",
"0x1p+15", "0x1p+16", "inf", "nan"
};
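For Zfa, fli.[hsdq] encodes the constant's table index in the rs1 field, so rendering one is a direct lookup. A sketch with an illustrative helper name:

static const char *fli_const_name(const rv_decode *dec)
{
    return rv_fli_name_const[dec->rs1 & 0x1f]; /* e.g. index 1 -> "min" */
}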
/* instruction formats */
#define rv_fmt_none "O\t"
#define rv_fmt_rs1 "O\t1"
#define rv_fmt_offset "O\to"
#define rv_fmt_pred_succ "O\tp,s"
#define rv_fmt_rs1_rs2 "O\t1,2"
#define rv_fmt_rd_imm "O\t0,i"
#define rv_fmt_rd_offset "O\t0,o"
#define rv_fmt_rd_rs1_rs2 "O\t0,1,2"
#define rv_fmt_frd_rs1 "O\t3,1"
#define rv_fmt_frd_frs1 "O\t3,4"
#define rv_fmt_rd_frs1 "O\t0,4"
#define rv_fmt_rd_frs1_frs2 "O\t0,4,5"
#define rv_fmt_frd_frs1_frs2 "O\t3,4,5"
#define rv_fmt_rm_frd_frs1 "O\tr,3,4"
#define rv_fmt_rm_frd_rs1 "O\tr,3,1"
#define rv_fmt_rm_rd_frs1 "O\tr,0,4"
#define rv_fmt_rm_frd_frs1_frs2 "O\tr,3,4,5"
#define rv_fmt_rm_frd_frs1_frs2_frs3 "O\tr,3,4,5,6"
#define rv_fmt_rd_rs1_imm "O\t0,1,i"
#define rv_fmt_rd_rs1_offset "O\t0,1,i"
#define rv_fmt_rd_offset_rs1 "O\t0,i(1)"
#define rv_fmt_frd_offset_rs1 "O\t3,i(1)"
#define rv_fmt_rd_csr_rs1 "O\t0,c,1"
#define rv_fmt_rd_csr_zimm "O\t0,c,7"
#define rv_fmt_rs2_offset_rs1 "O\t2,i(1)"
#define rv_fmt_frs2_offset_rs1 "O\t5,i(1)"
#define rv_fmt_rs1_rs2_offset "O\t1,2,o"
#define rv_fmt_rs2_rs1_offset "O\t2,1,o"
#define rv_fmt_aqrl_rd_rs2_rs1 "OAR\t0,2,(1)"
#define rv_fmt_aqrl_rd_rs1 "OAR\t0,(1)"
#define rv_fmt_rd "O\t0"
#define rv_fmt_rd_zimm "O\t0,7"
#define rv_fmt_rd_rs1 "O\t0,1"
#define rv_fmt_rd_rs2 "O\t0,2"
#define rv_fmt_rs1_offset "O\t1,o"
#define rv_fmt_rs2_offset "O\t2,o"
#define rv_fmt_rs1_rs2_bs "O\t1,2,b"
#define rv_fmt_rd_rs1_rnum "O\t0,1,n"
#define rv_fmt_ldst_vd_rs1_vm "O\tD,(1)m"
#define rv_fmt_ldst_vd_rs1_rs2_vm "O\tD,(1),2m"
#define rv_fmt_ldst_vd_rs1_vs2_vm "O\tD,(1),Fm"
#define rv_fmt_vd_vs2_vs1 "O\tD,F,E"
#define rv_fmt_vd_vs2_vs1_vl "O\tD,F,El"
#define rv_fmt_vd_vs2_vs1_vm "O\tD,F,Em"
#define rv_fmt_vd_vs2_rs1_vl "O\tD,F,1l"
#define rv_fmt_vd_vs2_fs1_vl "O\tD,F,4l"
#define rv_fmt_vd_vs2_rs1_vm "O\tD,F,1m"
#define rv_fmt_vd_vs2_fs1_vm "O\tD,F,4m"
#define rv_fmt_vd_vs2_imm_vl "O\tD,F,il"
#define rv_fmt_vd_vs2_imm_vm "O\tD,F,im"
#define rv_fmt_vd_vs2_uimm_vm "O\tD,F,um"
#define rv_fmt_vd_vs1_vs2_vm "O\tD,E,Fm"
#define rv_fmt_vd_rs1_vs2_vm "O\tD,1,Fm"
#define rv_fmt_vd_fs1_vs2_vm "O\tD,4,Fm"
#define rv_fmt_vd_vs1 "O\tD,E"
#define rv_fmt_vd_rs1 "O\tD,1"
#define rv_fmt_vd_fs1 "O\tD,4"
#define rv_fmt_vd_imm "O\tD,i"
#define rv_fmt_vd_vs2 "O\tD,F"
#define rv_fmt_vd_vs2_vm "O\tD,Fm"
#define rv_fmt_rd_vs2_vm "O\t0,Fm"
#define rv_fmt_rd_vs2 "O\t0,F"
#define rv_fmt_fd_vs2 "O\t3,F"
#define rv_fmt_vd_vm "O\tDm"
#define rv_fmt_vsetvli "O\t0,1,v"
#define rv_fmt_vsetivli "O\t0,u,v"
#define rv_fmt_rs1_rs2_zce_ldst "O\t2,i(1)"
#define rv_fmt_push_rlist "O\tx,-i"
#define rv_fmt_pop_rlist "O\tx,i"
#define rv_fmt_zcmt_index "O\ti"
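Reading these format strings (a legend inferred from the macro names, not an authoritative one): 'O' prints the mnemonic; digits select operands (0=rd, 1=rs1, 2=rs2, 3=frd, 4=frs1, 5=frs2, 6=frs3, 7=zimm); 'i' is an immediate, 'o' a branch/jump offset, 'c' a CSR name, 'r' the rounding mode, 'p'/'s' the fence pred/succ sets, 'A'/'R' the aq/rl suffixes, 'b'/'n' the crypto bs/rnum fields, 'D'/'E'/'F' vd/vs1/vs2, 'm' the vector mask, 'l' the vl tail, and 'x' a Zcmp register list.

/* Example: rv_fmt_rs2_offset_rs1 ("O\t2,i(1)") renders a store as
 * "sw      a0,8(sp)": mnemonic, rs2, immediate, then (rs1). */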
/* pseudo-instruction constraints */
@@ -1133,10 +1336,10 @@ static const rv_comp_data rvcp_fsgnjx_q[] = {
/* instruction metadata */
const rv_opcode_data rvi_opcode_data[] = {
const rv_opcode_data opcode_data[] = {
{ "illegal", rv_codec_illegal, rv_fmt_none, NULL, 0, 0, 0 },
{ "lui", rv_codec_u, rv_fmt_rd_uimm, NULL, 0, 0, 0 },
{ "auipc", rv_codec_u, rv_fmt_rd_uoffset, NULL, 0, 0, 0 },
{ "lui", rv_codec_u, rv_fmt_rd_imm, NULL, 0, 0, 0 },
{ "auipc", rv_codec_u, rv_fmt_rd_offset, NULL, 0, 0, 0 },
{ "jal", rv_codec_uj, rv_fmt_rd_offset, rvcp_jal, 0, 0, 0 },
{ "jalr", rv_codec_i, rv_fmt_rd_rs1_offset, rvcp_jalr, 0, 0, 0 },
{ "beq", rv_codec_sb, rv_fmt_rs1_rs2_offset, rvcp_beq, 0, 0, 0 },
@@ -1382,7 +1585,7 @@ const rv_opcode_data rvi_opcode_data[] = {
rv_op_addi },
{ "c.addi16sp", rv_codec_ci_16sp, rv_fmt_rd_rs1_imm, NULL, rv_op_addi,
rv_op_addi, rv_op_addi, rvcd_imm_nz },
{ "c.lui", rv_codec_ci_lui, rv_fmt_rd_uimm, NULL, rv_op_lui, rv_op_lui,
{ "c.lui", rv_codec_ci_lui, rv_fmt_rd_imm, NULL, rv_op_lui, rv_op_lui,
rv_op_lui, rvcd_imm_nz },
{ "c.srli", rv_codec_cb_sh6, rv_fmt_rd_rs1_imm, NULL, rv_op_srli,
rv_op_srli, rv_op_srli, rvcd_imm_nz },
@@ -1965,49 +2168,6 @@ const rv_opcode_data rvi_opcode_data[] = {
{ "cm.jalt", rv_codec_zcmt_jt, rv_fmt_zcmt_index, NULL, 0 },
{ "czero.eqz", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "czero.nez", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "fcvt.bf16.s", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "fcvt.s.bf16", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "vfncvtbf16.f.f.w", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
{ "vfwcvtbf16.f.f.v", rv_codec_v_r, rv_fmt_vd_vs2_vm, NULL, 0, 0, 0 },
{ "vfwmaccbf16.vv", rv_codec_v_r, rv_fmt_vd_vs1_vs2_vm, NULL, 0, 0, 0 },
{ "vfwmaccbf16.vf", rv_codec_v_r, rv_fmt_vd_fs1_vs2_vm, NULL, 0, 0, 0 },
{ "flh", rv_codec_i, rv_fmt_frd_offset_rs1, NULL, 0, 0, 0 },
{ "fsh", rv_codec_s, rv_fmt_frs2_offset_rs1, NULL, 0, 0, 0 },
{ "fmv.h.x", rv_codec_r, rv_fmt_frd_rs1, NULL, 0, 0, 0 },
{ "fmv.x.h", rv_codec_r, rv_fmt_rd_frs1, NULL, 0, 0, 0 },
{ "fli.s", rv_codec_fli, rv_fmt_fli, NULL, 0, 0, 0 },
{ "fli.d", rv_codec_fli, rv_fmt_fli, NULL, 0, 0, 0 },
{ "fli.q", rv_codec_fli, rv_fmt_fli, NULL, 0, 0, 0 },
{ "fli.h", rv_codec_fli, rv_fmt_fli, NULL, 0, 0, 0 },
{ "fminm.s", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fmaxm.s", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fminm.d", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fmaxm.d", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fminm.q", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fmaxm.q", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fminm.h", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fmaxm.h", rv_codec_r, rv_fmt_frd_frs1_frs2, NULL, 0, 0, 0 },
{ "fround.s", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "froundnx.s", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "fround.d", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "froundnx.d", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "fround.q", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "froundnx.q", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "fround.h", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "froundnx.h", rv_codec_r_m, rv_fmt_rm_frd_frs1, NULL, 0, 0, 0 },
{ "fcvtmod.w.d", rv_codec_r_m, rv_fmt_rm_rd_frs1, NULL, 0, 0, 0 },
{ "fmvh.x.d", rv_codec_r, rv_fmt_rd_frs1, NULL, 0, 0, 0 },
{ "fmvp.d.x", rv_codec_r, rv_fmt_frd_rs1_rs2, NULL, 0, 0, 0 },
{ "fmvh.x.q", rv_codec_r, rv_fmt_rd_frs1, NULL, 0, 0, 0 },
{ "fmvp.q.x", rv_codec_r, rv_fmt_frd_rs1_rs2, NULL, 0, 0, 0 },
{ "fleq.s", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
{ "fltq.s", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
{ "fleq.d", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
{ "fltq.d", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
{ "fleq.q", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
{ "fltq.q", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
{ "fleq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
{ "fltq.h", rv_codec_r, rv_fmt_rd_frs1_frs2, NULL, 0, 0, 0 },
};
/* CSR names */
@@ -2116,8 +2276,8 @@ static const char *csr_name(int csrno)
case 0x03ba: return "pmpaddr10";
case 0x03bb: return "pmpaddr11";
case 0x03bc: return "pmpaddr12";
case 0x03bd: return "pmpaddr13";
case 0x03be: return "pmpaddr14";
case 0x03bd: return "pmpaddr14";
case 0x03be: return "pmpaddr13";
case 0x03bf: return "pmpaddr15";
case 0x0780: return "mtohost";
case 0x0781: return "mfromhost";
@@ -2483,7 +2643,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 3: op = rv_op_vloxei8_v; break;
}
break;
case 1: op = rv_op_flh; break;
case 2: op = rv_op_flw; break;
case 3: op = rv_op_fld; break;
case 4: op = rv_op_flq; break;
@@ -2687,7 +2846,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 3: op = rv_op_vsoxei8_v; break;
}
break;
case 1: op = rv_op_fsh; break;
case 2: op = rv_op_fsw; break;
case 3: op = rv_op_fsd; break;
case 4: op = rv_op_fsq; break;
@@ -2947,62 +3105,36 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
switch ((inst >> 12) & 0b111) {
case 0: op = rv_op_fmin_s; break;
case 1: op = rv_op_fmax_s; break;
case 2: op = rv_op_fminm_s; break;
case 3: op = rv_op_fmaxm_s; break;
}
break;
case 21:
switch ((inst >> 12) & 0b111) {
case 0: op = rv_op_fmin_d; break;
case 1: op = rv_op_fmax_d; break;
case 2: op = rv_op_fminm_d; break;
case 3: op = rv_op_fmaxm_d; break;
}
break;
case 22:
switch (((inst >> 12) & 0b111)) {
case 2: op = rv_op_fminm_h; break;
case 3: op = rv_op_fmaxm_h; break;
}
break;
case 23:
switch ((inst >> 12) & 0b111) {
case 0: op = rv_op_fmin_q; break;
case 1: op = rv_op_fmax_q; break;
case 2: op = rv_op_fminm_q; break;
case 3: op = rv_op_fmaxm_q; break;
}
break;
case 32:
switch ((inst >> 20) & 0b11111) {
case 1: op = rv_op_fcvt_s_d; break;
case 3: op = rv_op_fcvt_s_q; break;
case 4: op = rv_op_fround_s; break;
case 5: op = rv_op_froundnx_s; break;
case 6: op = rv_op_fcvt_s_bf16; break;
}
break;
case 33:
switch ((inst >> 20) & 0b11111) {
case 0: op = rv_op_fcvt_d_s; break;
case 3: op = rv_op_fcvt_d_q; break;
case 4: op = rv_op_fround_d; break;
case 5: op = rv_op_froundnx_d; break;
}
break;
case 34:
switch (((inst >> 20) & 0b11111)) {
case 4: op = rv_op_fround_h; break;
case 5: op = rv_op_froundnx_h; break;
case 8: op = rv_op_fcvt_bf16_s; break;
}
break;
case 35:
switch ((inst >> 20) & 0b11111) {
case 0: op = rv_op_fcvt_q_s; break;
case 1: op = rv_op_fcvt_q_d; break;
case 4: op = rv_op_fround_q; break;
case 5: op = rv_op_froundnx_q; break;
}
break;
case 44:
@@ -3025,8 +3157,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 0: op = rv_op_fle_s; break;
case 1: op = rv_op_flt_s; break;
case 2: op = rv_op_feq_s; break;
case 4: op = rv_op_fleq_s; break;
case 5: op = rv_op_fltq_s; break;
}
break;
case 81:
@@ -3034,14 +3164,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 0: op = rv_op_fle_d; break;
case 1: op = rv_op_flt_d; break;
case 2: op = rv_op_feq_d; break;
case 4: op = rv_op_fleq_d; break;
case 5: op = rv_op_fltq_d; break;
}
break;
case 82:
switch (((inst >> 12) & 0b111)) {
case 4: op = rv_op_fleq_h; break;
case 5: op = rv_op_fltq_h; break;
}
break;
case 83:
@@ -3049,18 +3171,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 0: op = rv_op_fle_q; break;
case 1: op = rv_op_flt_q; break;
case 2: op = rv_op_feq_q; break;
case 4: op = rv_op_fleq_q; break;
case 5: op = rv_op_fltq_q; break;
}
break;
case 89:
switch (((inst >> 12) & 0b111)) {
case 0: op = rv_op_fmvp_d_x; break;
}
break;
case 91:
switch (((inst >> 12) & 0b111)) {
case 0: op = rv_op_fmvp_q_x; break;
}
break;
case 96:
@@ -3077,7 +3187,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 1: op = rv_op_fcvt_wu_d; break;
case 2: op = rv_op_fcvt_l_d; break;
case 3: op = rv_op_fcvt_lu_d; break;
case 8: op = rv_op_fcvtmod_w_d; break;
}
break;
case 99:
@@ -3124,13 +3233,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
((inst >> 12) & 0b00000111)) {
case 0: op = rv_op_fmv_x_d; break;
case 1: op = rv_op_fclass_d; break;
case 8: op = rv_op_fmvh_x_d; break;
}
break;
case 114:
switch (((inst >> 17) & 0b11111000) |
((inst >> 12) & 0b00000111)) {
case 0: op = rv_op_fmv_x_h; break;
}
break;
case 115:
@@ -3138,35 +3240,24 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
((inst >> 12) & 0b00000111)) {
case 0: op = rv_op_fmv_x_q; break;
case 1: op = rv_op_fclass_q; break;
case 8: op = rv_op_fmvh_x_q; break;
}
break;
case 120:
switch (((inst >> 17) & 0b11111000) |
((inst >> 12) & 0b00000111)) {
case 0: op = rv_op_fmv_s_x; break;
case 8: op = rv_op_fli_s; break;
}
break;
case 121:
switch (((inst >> 17) & 0b11111000) |
((inst >> 12) & 0b00000111)) {
case 0: op = rv_op_fmv_d_x; break;
case 8: op = rv_op_fli_d; break;
}
break;
case 122:
switch (((inst >> 17) & 0b11111000) |
((inst >> 12) & 0b00000111)) {
case 0: op = rv_op_fmv_h_x; break;
case 8: op = rv_op_fli_h; break;
}
break;
case 123:
switch (((inst >> 17) & 0b11111000) |
((inst >> 12) & 0b00000111)) {
case 0: op = rv_op_fmv_q_x; break;
case 8: op = rv_op_fli_q; break;
}
break;
}
@@ -3259,7 +3350,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 10: op = rv_op_vfwcvt_f_xu_v; break;
case 11: op = rv_op_vfwcvt_f_x_v; break;
case 12: op = rv_op_vfwcvt_f_f_v; break;
case 13: op = rv_op_vfwcvtbf16_f_f_v; break;
case 14: op = rv_op_vfwcvt_rtz_xu_f_v; break;
case 15: op = rv_op_vfwcvt_rtz_x_f_v; break;
case 16: op = rv_op_vfncvt_xu_f_w; break;
@@ -3270,7 +3360,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 21: op = rv_op_vfncvt_rod_f_f_w; break;
case 22: op = rv_op_vfncvt_rtz_xu_f_w; break;
case 23: op = rv_op_vfncvt_rtz_x_f_w; break;
case 29: op = rv_op_vfncvtbf16_f_f_w; break;
}
break;
case 19:
@@ -3302,7 +3391,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 52: op = rv_op_vfwadd_wv; break;
case 54: op = rv_op_vfwsub_wv; break;
case 56: op = rv_op_vfwmul_vv; break;
case 59: op = rv_op_vfwmaccbf16_vv; break;
case 60: op = rv_op_vfwmacc_vv; break;
case 61: op = rv_op_vfwnmacc_vv; break;
case 62: op = rv_op_vfwmsac_vv; break;
@@ -3541,7 +3629,6 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
case 52: op = rv_op_vfwadd_wf; break;
case 54: op = rv_op_vfwsub_wf; break;
case 56: op = rv_op_vfwmul_vf; break;
case 59: op = rv_op_vfwmaccbf16_vf; break;
case 60: op = rv_op_vfwmacc_vf; break;
case 61: op = rv_op_vfwnmacc_vf; break;
case 62: op = rv_op_vfwmsac_vf; break;
@@ -4047,26 +4134,6 @@ static uint32_t operand_zcmp_rlist(rv_inst inst)
return ((inst << 56) >> 60);
}
static uint32_t operand_imm6(rv_inst inst)
{
return (inst << 38) >> 60;
}
static uint32_t operand_imm2(rv_inst inst)
{
return (inst << 37) >> 62;
}
static uint32_t operand_immh(rv_inst inst)
{
return (inst << 32) >> 58;
}
static uint32_t operand_imml(rv_inst inst)
{
return (inst << 38) >> 58;
}
static uint32_t calculate_stack_adj(rv_isa isa, uint32_t rlist, uint32_t spimm)
{
int xlen_bytes_log2 = isa == rv64 ? 3 : 2;
@@ -4090,7 +4157,6 @@ static uint32_t operand_tbl_index(rv_inst inst)
static void decode_inst_operands(rv_decode *dec, rv_isa isa)
{
const rv_opcode_data *opcode_data = dec->opcode_data;
rv_inst inst = dec->inst;
dec->codec = opcode_data[dec->op].codec;
switch (dec->codec) {
@@ -4430,42 +4496,6 @@ static void decode_inst_operands(rv_decode *dec, rv_isa isa)
break;
case rv_codec_zcmt_jt:
dec->imm = operand_tbl_index(inst);
break;
case rv_codec_fli:
dec->rd = operand_rd(inst);
dec->imm = operand_rs1(inst);
break;
case rv_codec_r2_imm5:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);
dec->imm = operand_rs2(inst);
break;
case rv_codec_r2:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);
break;
case rv_codec_r2_imm6:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);
dec->imm = operand_imm6(inst);
break;
case rv_codec_r_imm2:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);
dec->rs2 = operand_rs2(inst);
dec->imm = operand_imm2(inst);
break;
case rv_codec_r2_immhl:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);
dec->imm = operand_immh(inst);
dec->imm1 = operand_imml(inst);
break;
case rv_codec_r2_imm2_imm5:
dec->rd = operand_rd(inst);
dec->rs1 = operand_rs1(inst);
dec->imm = sextract32(operand_rs2(inst), 0, 5);
dec->imm1 = operand_imm2(inst);
break;
};
}
@@ -4609,7 +4639,6 @@ static void append(char *s1, const char *s2, size_t n)
static void format_inst(char *buf, size_t buflen, size_t tab, rv_decode *dec)
{
const rv_opcode_data *opcode_data = dec->opcode_data;
char tmp[64];
const char *fmt;
@@ -4680,10 +4709,6 @@ static void format_inst(char *buf, size_t buflen, size_t tab, rv_decode *dec)
snprintf(tmp, sizeof(tmp), "%u", ((uint32_t)dec->imm & 0b11111));
append(buf, tmp, buflen);
break;
case 'j':
snprintf(tmp, sizeof(tmp), "%d", dec->imm1);
append(buf, tmp, buflen);
break;
case 'o':
snprintf(tmp, sizeof(tmp), "%d", dec->imm);
append(buf, tmp, buflen);
@@ -4694,19 +4719,6 @@ static void format_inst(char *buf, size_t buflen, size_t tab, rv_decode *dec)
dec->pc + dec->imm);
append(buf, tmp, buflen);
break;
case 'U':
fmt++;
snprintf(tmp, sizeof(tmp), "%d", dec->imm >> 12);
append(buf, tmp, buflen);
if (*fmt == 'o') {
while (strlen(buf) < tab * 2) {
append(buf, " ", buflen);
}
snprintf(tmp, sizeof(tmp), "# 0x%" PRIx64,
dec->pc + dec->imm);
append(buf, tmp, buflen);
}
break;
case 'c': {
const char *name = csr_name(dec->imm & 0xfff);
if (name) {
@@ -4857,9 +4869,6 @@ static void format_inst(char *buf, size_t buflen, size_t tab, rv_decode *dec)
append(buf, tmp, buflen);
break;
}
case 'h':
append(buf, rv_fli_name_const[dec->imm], buflen);
break;
default:
break;
}
@@ -4871,7 +4880,6 @@ static void format_inst(char *buf, size_t buflen, size_t tab, rv_decode *dec)
static void decode_inst_lift_pseudo(rv_decode *dec)
{
const rv_opcode_data *opcode_data = dec->opcode_data;
const rv_comp_data *comp_data = opcode_data[dec->op].pseudo;
if (!comp_data) {
return;
@@ -4890,7 +4898,6 @@ static void decode_inst_lift_pseudo(rv_decode *dec)
static void decode_inst_decompress_rv32(rv_decode *dec)
{
const rv_opcode_data *opcode_data = dec->opcode_data;
int decomp_op = opcode_data[dec->op].decomp_rv32;
if (decomp_op != rv_op_illegal) {
if ((opcode_data[dec->op].decomp_data & rvcd_imm_nz)
@@ -4905,7 +4912,6 @@ static void decode_inst_decompress_rv32(rv_decode *dec)
static void decode_inst_decompress_rv64(rv_decode *dec)
{
const rv_opcode_data *opcode_data = dec->opcode_data;
int decomp_op = opcode_data[dec->op].decomp_rv64;
if (decomp_op != rv_op_illegal) {
if ((opcode_data[dec->op].decomp_data & rvcd_imm_nz)
@@ -4920,7 +4926,6 @@ static void decode_inst_decompress_rv64(rv_decode *dec)
static void decode_inst_decompress_rv128(rv_decode *dec)
{
const rv_opcode_data *opcode_data = dec->opcode_data;
int decomp_op = opcode_data[dec->op].decomp_rv128;
if (decomp_op != rv_op_illegal) {
if ((opcode_data[dec->op].decomp_data & rvcd_imm_nz)
@@ -4958,44 +4963,7 @@ disasm_inst(char *buf, size_t buflen, rv_isa isa, uint64_t pc, rv_inst inst,
dec.pc = pc;
dec.inst = inst;
dec.cfg = cfg;
static const struct {
bool (*guard_func)(const RISCVCPUConfig *);
const rv_opcode_data *opcode_data;
void (*decode_func)(rv_decode *, rv_isa);
} decoders[] = {
{ always_true_p, rvi_opcode_data, decode_inst_opcode },
{ has_xtheadba_p, xthead_opcode_data, decode_xtheadba },
{ has_xtheadbb_p, xthead_opcode_data, decode_xtheadbb },
{ has_xtheadbs_p, xthead_opcode_data, decode_xtheadbs },
{ has_xtheadcmo_p, xthead_opcode_data, decode_xtheadcmo },
{ has_xtheadcondmov_p, xthead_opcode_data, decode_xtheadcondmov },
{ has_xtheadfmemidx_p, xthead_opcode_data, decode_xtheadfmemidx },
{ has_xtheadfmv_p, xthead_opcode_data, decode_xtheadfmv },
{ has_xtheadmac_p, xthead_opcode_data, decode_xtheadmac },
{ has_xtheadmemidx_p, xthead_opcode_data, decode_xtheadmemidx },
{ has_xtheadmempair_p, xthead_opcode_data, decode_xtheadmempair },
{ has_xtheadsync_p, xthead_opcode_data, decode_xtheadsync },
{ has_XVentanaCondOps_p, ventana_opcode_data, decode_xventanacondops },
};
for (size_t i = 0; i < ARRAY_SIZE(decoders); i++) {
bool (*guard_func)(const RISCVCPUConfig *) = decoders[i].guard_func;
const rv_opcode_data *opcode_data = decoders[i].opcode_data;
void (*decode_func)(rv_decode *, rv_isa) = decoders[i].decode_func;
if (guard_func(cfg)) {
dec.opcode_data = opcode_data;
decode_func(&dec, isa);
if (dec.op != rv_op_illegal)
break;
}
}
if (dec.op == rv_op_illegal) {
dec.opcode_data = rvi_opcode_data;
}
decode_inst_opcode(&dec, isa);
decode_inst_operands(&dec, isa);
decode_inst_decompress(&dec, isa);
decode_inst_lift_pseudo(&dec);


@@ -1,304 +0,0 @@
/*
* QEMU disassembler -- RISC-V specific header.
*
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef DISAS_RISCV_H
#define DISAS_RISCV_H
#include "qemu/osdep.h"
#include "target/riscv/cpu_cfg.h"
/* types */
typedef uint64_t rv_inst;
typedef uint16_t rv_opcode;
/* enums */
typedef enum {
rv32,
rv64,
rv128
} rv_isa;
typedef enum {
rv_rm_rne = 0,
rv_rm_rtz = 1,
rv_rm_rdn = 2,
rv_rm_rup = 3,
rv_rm_rmm = 4,
rv_rm_dyn = 7,
} rv_rm;
typedef enum {
rv_fence_i = 8,
rv_fence_o = 4,
rv_fence_r = 2,
rv_fence_w = 1,
} rv_fence;
typedef enum {
rv_ireg_zero,
rv_ireg_ra,
rv_ireg_sp,
rv_ireg_gp,
rv_ireg_tp,
rv_ireg_t0,
rv_ireg_t1,
rv_ireg_t2,
rv_ireg_s0,
rv_ireg_s1,
rv_ireg_a0,
rv_ireg_a1,
rv_ireg_a2,
rv_ireg_a3,
rv_ireg_a4,
rv_ireg_a5,
rv_ireg_a6,
rv_ireg_a7,
rv_ireg_s2,
rv_ireg_s3,
rv_ireg_s4,
rv_ireg_s5,
rv_ireg_s6,
rv_ireg_s7,
rv_ireg_s8,
rv_ireg_s9,
rv_ireg_s10,
rv_ireg_s11,
rv_ireg_t3,
rv_ireg_t4,
rv_ireg_t5,
rv_ireg_t6,
} rv_ireg;
typedef enum {
rvc_end,
rvc_rd_eq_ra,
rvc_rd_eq_x0,
rvc_rs1_eq_x0,
rvc_rs2_eq_x0,
rvc_rs2_eq_rs1,
rvc_rs1_eq_ra,
rvc_imm_eq_zero,
rvc_imm_eq_n1,
rvc_imm_eq_p1,
rvc_csr_eq_0x001,
rvc_csr_eq_0x002,
rvc_csr_eq_0x003,
rvc_csr_eq_0xc00,
rvc_csr_eq_0xc01,
rvc_csr_eq_0xc02,
rvc_csr_eq_0xc80,
rvc_csr_eq_0xc81,
rvc_csr_eq_0xc82,
} rvc_constraint;
typedef enum {
rv_codec_illegal,
rv_codec_none,
rv_codec_u,
rv_codec_uj,
rv_codec_i,
rv_codec_i_sh5,
rv_codec_i_sh6,
rv_codec_i_sh7,
rv_codec_i_csr,
rv_codec_s,
rv_codec_sb,
rv_codec_r,
rv_codec_r_m,
rv_codec_r4_m,
rv_codec_r_a,
rv_codec_r_l,
rv_codec_r_f,
rv_codec_cb,
rv_codec_cb_imm,
rv_codec_cb_sh5,
rv_codec_cb_sh6,
rv_codec_ci,
rv_codec_ci_sh5,
rv_codec_ci_sh6,
rv_codec_ci_16sp,
rv_codec_ci_lwsp,
rv_codec_ci_ldsp,
rv_codec_ci_lqsp,
rv_codec_ci_li,
rv_codec_ci_lui,
rv_codec_ci_none,
rv_codec_ciw_4spn,
rv_codec_cj,
rv_codec_cj_jal,
rv_codec_cl_lw,
rv_codec_cl_ld,
rv_codec_cl_lq,
rv_codec_cr,
rv_codec_cr_mv,
rv_codec_cr_jalr,
rv_codec_cr_jr,
rv_codec_cs,
rv_codec_cs_sw,
rv_codec_cs_sd,
rv_codec_cs_sq,
rv_codec_css_swsp,
rv_codec_css_sdsp,
rv_codec_css_sqsp,
rv_codec_k_bs,
rv_codec_k_rnum,
rv_codec_v_r,
rv_codec_v_ldst,
rv_codec_v_i,
rv_codec_vsetvli,
rv_codec_vsetivli,
rv_codec_zcb_ext,
rv_codec_zcb_mul,
rv_codec_zcb_lb,
rv_codec_zcb_lh,
rv_codec_zcmp_cm_pushpop,
rv_codec_zcmp_cm_mv,
rv_codec_zcmt_jt,
rv_codec_r2_imm5,
rv_codec_r2,
rv_codec_r2_imm6,
rv_codec_r_imm2,
rv_codec_r2_immhl,
rv_codec_r2_imm2_imm5,
rv_codec_fli,
} rv_codec;
/* structures */
typedef struct {
const int op;
const rvc_constraint *constraints;
} rv_comp_data;
typedef struct {
const char * const name;
const rv_codec codec;
const char * const format;
const rv_comp_data *pseudo;
const short decomp_rv32;
const short decomp_rv64;
const short decomp_rv128;
const short decomp_data;
} rv_opcode_data;
typedef struct {
RISCVCPUConfig *cfg;
uint64_t pc;
uint64_t inst;
const rv_opcode_data *opcode_data;
int32_t imm;
int32_t imm1;
uint16_t op;
uint8_t codec;
uint8_t rd;
uint8_t rs1;
uint8_t rs2;
uint8_t rs3;
uint8_t rm;
uint8_t pred;
uint8_t succ;
uint8_t aq;
uint8_t rl;
uint8_t bs;
uint8_t rnum;
uint8_t vm;
uint32_t vzimm;
uint8_t rlist;
} rv_decode;
enum {
rv_op_illegal = 0
};
enum {
rvcd_imm_nz = 0x1
};
/* instruction formats */
#define rv_fmt_none "O\t"
#define rv_fmt_rs1 "O\t1"
#define rv_fmt_offset "O\to"
#define rv_fmt_pred_succ "O\tp,s"
#define rv_fmt_rs1_rs2 "O\t1,2"
#define rv_fmt_rd_imm "O\t0,i"
#define rv_fmt_rd_uimm "O\t0,Ui"
#define rv_fmt_rd_offset "O\t0,o"
#define rv_fmt_rd_uoffset "O\t0,Uo"
#define rv_fmt_rd_rs1_rs2 "O\t0,1,2"
#define rv_fmt_frd_rs1 "O\t3,1"
#define rv_fmt_frd_rs1_rs2 "O\t3,1,2"
#define rv_fmt_frd_frs1 "O\t3,4"
#define rv_fmt_rd_frs1 "O\t0,4"
#define rv_fmt_rd_frs1_frs2 "O\t0,4,5"
#define rv_fmt_frd_frs1_frs2 "O\t3,4,5"
#define rv_fmt_rm_frd_frs1 "O\tr,3,4"
#define rv_fmt_rm_frd_rs1 "O\tr,3,1"
#define rv_fmt_rm_rd_frs1 "O\tr,0,4"
#define rv_fmt_rm_frd_frs1_frs2 "O\tr,3,4,5"
#define rv_fmt_rm_frd_frs1_frs2_frs3 "O\tr,3,4,5,6"
#define rv_fmt_rd_rs1_imm "O\t0,1,i"
#define rv_fmt_rd_rs1_offset "O\t0,1,i"
#define rv_fmt_rd_offset_rs1 "O\t0,i(1)"
#define rv_fmt_frd_offset_rs1 "O\t3,i(1)"
#define rv_fmt_rd_csr_rs1 "O\t0,c,1"
#define rv_fmt_rd_csr_zimm "O\t0,c,7"
#define rv_fmt_rs2_offset_rs1 "O\t2,i(1)"
#define rv_fmt_frs2_offset_rs1 "O\t5,i(1)"
#define rv_fmt_rs1_rs2_offset "O\t1,2,o"
#define rv_fmt_rs2_rs1_offset "O\t2,1,o"
#define rv_fmt_aqrl_rd_rs2_rs1 "OAR\t0,2,(1)"
#define rv_fmt_aqrl_rd_rs1 "OAR\t0,(1)"
#define rv_fmt_rd "O\t0"
#define rv_fmt_rd_zimm "O\t0,7"
#define rv_fmt_rd_rs1 "O\t0,1"
#define rv_fmt_rd_rs2 "O\t0,2"
#define rv_fmt_rs1_offset "O\t1,o"
#define rv_fmt_rs2_offset "O\t2,o"
#define rv_fmt_rs1_rs2_bs "O\t1,2,b"
#define rv_fmt_rd_rs1_rnum "O\t0,1,n"
#define rv_fmt_ldst_vd_rs1_vm "O\tD,(1)m"
#define rv_fmt_ldst_vd_rs1_rs2_vm "O\tD,(1),2m"
#define rv_fmt_ldst_vd_rs1_vs2_vm "O\tD,(1),Fm"
#define rv_fmt_vd_vs2_vs1 "O\tD,F,E"
#define rv_fmt_vd_vs2_vs1_vl "O\tD,F,El"
#define rv_fmt_vd_vs2_vs1_vm "O\tD,F,Em"
#define rv_fmt_vd_vs2_rs1_vl "O\tD,F,1l"
#define rv_fmt_vd_vs2_fs1_vl "O\tD,F,4l"
#define rv_fmt_vd_vs2_rs1_vm "O\tD,F,1m"
#define rv_fmt_vd_vs2_fs1_vm "O\tD,F,4m"
#define rv_fmt_vd_vs2_imm_vl "O\tD,F,il"
#define rv_fmt_vd_vs2_imm_vm "O\tD,F,im"
#define rv_fmt_vd_vs2_uimm_vm "O\tD,F,um"
#define rv_fmt_vd_vs1_vs2_vm "O\tD,E,Fm"
#define rv_fmt_vd_rs1_vs2_vm "O\tD,1,Fm"
#define rv_fmt_vd_fs1_vs2_vm "O\tD,4,Fm"
#define rv_fmt_vd_vs1 "O\tD,E"
#define rv_fmt_vd_rs1 "O\tD,1"
#define rv_fmt_vd_fs1 "O\tD,4"
#define rv_fmt_vd_imm "O\tD,i"
#define rv_fmt_vd_vs2 "O\tD,F"
#define rv_fmt_vd_vs2_vm "O\tD,Fm"
#define rv_fmt_rd_vs2_vm "O\t0,Fm"
#define rv_fmt_rd_vs2 "O\t0,F"
#define rv_fmt_fd_vs2 "O\t3,F"
#define rv_fmt_vd_vm "O\tDm"
#define rv_fmt_vsetvli "O\t0,1,v"
#define rv_fmt_vsetivli "O\t0,u,v"
#define rv_fmt_rs1_rs2_zce_ldst "O\t2,i(1)"
#define rv_fmt_push_rlist "O\tx,-i"
#define rv_fmt_pop_rlist "O\tx,i"
#define rv_fmt_zcmt_index "O\ti"
#define rv_fmt_rd_rs1_rs2_imm "O\t0,1,2,i"
#define rv_fmt_frd_rs1_rs2_imm "O\t3,1,2,i"
#define rv_fmt_rd_rs1_immh_imml "O\t0,1,i,j"
#define rv_fmt_rd_rs1_immh_imml_addr "O\t0,(1),i,j"
#define rv_fmt_rd2_imm "O\t0,2,(1),i"
#define rv_fmt_fli "O\t3,h"
#endif /* DISAS_RISCV_H */
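/*
 * Reading aid, not part of the original header: each format string
 * starts with 'O' for the opcode mnemonic, then a tab and operand
 * placeholders. '0'/'1'/'2' are rd/rs1/rs2, '3'-'6' are the float
 * registers frd/frs1/frs2/frs3, 'i' is an immediate, 'o' a branch
 * offset, 'c' a CSR name. Two hedged examples of how formats render
 * (the concrete instructions are invented):
 *
 *   "O\t0,1,i"  (rv_fmt_rd_rs1_imm)     ->  "addi    a0,a1,16"
 *   "O\t3,i(1)" (rv_fmt_frd_offset_rs1) ->  "flw     fa0,8(a1)"
 */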


@@ -116,11 +116,6 @@ Use "whpx" (on Windows) or "hvf" (on macOS) instead.
Use ``-run-with async-teardown=on`` instead.
``-chroot`` (since 8.1)
'''''''''''''''''''''''
Use ``-run-with chroot=dir`` instead.
``-singlestep`` (since 8.1)
'''''''''''''''''''''''''''
@@ -348,29 +343,6 @@ the addition of volatile memory support, it is now necessary to distinguish
between persistent and volatile memory backends. As such, memdev is deprecated
in favor of persistent-memdev.
``-fsdev proxy`` and ``-virtfs proxy`` (since 8.1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The 9p ``proxy`` filesystem backend driver has been deprecated and will be
removed (along with its proxy helper daemon) in a future version of QEMU. Please
use ``-fsdev local`` or ``-virtfs local`` for using the 9p ``local`` filesystem
backend, or alternatively consider deploying virtiofsd instead.
The 9p ``proxy`` backend was originally developed as an alternative to the 9p
``local`` backend. The idea was to enhance security by dispatching the actual
low-level filesystem operations from the 9p server (the QEMU process) to a
separate process (the virtfs-proxy-helper binary). However, this alternative
never gained momentum. The proxy backend is much slower than the local
backend, hasn't seen any development in years, and has proven to be less
secure, especially because its helper daemon must be run as root, whereas with
the local backend QEMU typically runs as an unprivileged user and behaviour
can be tightened by mapping permissions and the like with its 'mapped'
security model option.
Nowadays it would make sense to reimplement the ``proxy`` backend using
QEMU's ``vhost`` feature, which would eliminate the high latency costs the
9p ``proxy`` backend currently suffers from. However, to date nobody has
indicated plans for such a reimplementation, unfortunately.
Block device options
''''''''''''''''''''
@@ -451,13 +423,3 @@ both, older and future versions of QEMU.
The ``blacklist`` config file option has been renamed to ``block-rpcs``
(to be in sync with the renaming of the corresponding command line
option).
Migration
---------
``skipped`` MigrationStats field (since 8.1)
''''''''''''''''''''''''''''''''''''''''''''
``skipped`` field in Migration stats has been deprecated. It hasn't
been used for more than 10 years.


@@ -8,4 +8,4 @@ QEMU is a trademark of Fabrice Bellard.
QEMU is released under the `GNU General Public
License <https://www.gnu.org/licenses/gpl-2.0.txt>`__, version 2. Parts
of QEMU have specific licenses, see file
`LICENSE <https://gitlab.com/qemu-project/qemu/-/raw/master/LICENSE>`__.
`LICENSE <https://git.qemu.org/?p=qemu.git;a=blob_plain;f=LICENSE>`__.


@@ -11,7 +11,5 @@ generated from in-code annotations to function prototypes.
loads-stores
memory
modules
qom-api
qdev-api
ui
zoned-storage


@@ -1,5 +1,3 @@
.. _development_process:
QEMU Community Processes
------------------------


@@ -1,5 +1,3 @@
.. _tcg:
TCG Emulation
-------------


@@ -2,30 +2,10 @@
Developer Information
---------------------
This section of the manual documents various parts of the internals of
QEMU. You only need to read it if you are interested in reading or
This section of the manual documents various parts of the internals of QEMU.
You only need to read it if you are interested in reading or
modifying QEMU's source code.
QEMU is a large and mature project with a number of complex subsystems
that can be overwhelming to understand. The development documentation
is not comprehensive but hopefully presents enough to get you started.
If there are areas that are unclear, please reach out either via the
IRC channel or the mailing list, and hopefully we can improve the
documentation for future developers.
All developers will want to familiarise themselves with
:ref:`development_process` and how the community interacts. Please pay
particular attention to the :ref:`coding-style` and
:ref:`submitting-a-patch` sections to avoid common pitfalls.
If you wish to implement a new hardware model you will want to read
through the :ref:`qom` documentation to understand how QEMU's object
model works.
Those wishing to enhance or add new CPU emulation capabilities will
want to read our :ref:`tcg` documentation, especially the overview of
the :ref:`tcg_internals`.
.. toctree::
:maxdepth: 1


@@ -594,7 +594,8 @@ Postcopy
'Postcopy' migration is a way to deal with migrations that refuse to converge
(or take too long to converge). Its plus side is that there is an upper bound
on the amount of migration traffic and the time it takes; the downside is that during
the postcopy phase, a failure of *either* side causes the guest to be lost.
the postcopy phase, a failure of *either* side or the network connection causes
the guest to be lost.
In postcopy the destination CPUs are started before all the memory has been
transferred, and accesses to pages that are yet to be transferred cause
@@ -720,42 +721,6 @@ processing.
is no longer used by migration, while the listen thread carries on servicing
page data until the end of migration.
Postcopy Recovery
-----------------
Compared to precopy, postcopy is special in its error handling. When an
error happens (in this case, mostly network errors), QEMU cannot easily
fail the migration because VM data resides in both the source and destination
QEMU instances. Instead, when an issue happens, QEMU on both sides
will go into a paused state, and a recovery phase is needed to continue the
paused postcopy migration.
The recovery phase normally contains a few steps:
- When a network issue occurs, both QEMU instances go into the PAUSED state.
- When the network is recovered (or a new network is provided), the admin
can set up a new channel for migration using the QMP command
'migrate-recover' on the destination node, preparing for a resume.
- On the source host, the admin can then continue the interrupted postcopy
migration using the QMP command 'migrate' with the resume=true flag set
(see the sketch below).
- After the connection is re-established, QEMU will continue the postcopy
migration on both sides.
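Concretely, the two QMP commands might look like this (a sketch; the URIs
are hypothetical examples, any channel reachable from both hosts will do)::

    # On the destination, prepare the new listening channel:
    {"execute": "migrate-recover",
     "arguments": {"uri": "tcp:0.0.0.0:5555"}}

    # On the source, resume the interrupted migration over that channel:
    {"execute": "migrate",
     "arguments": {"uri": "tcp:192.0.2.1:5555", "resume": true}}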
During a paused postcopy migration, the VM can logically still continue
running, and it is not impacted by accesses to pages that were already
migrated to the destination VM before the interruption happened.
However, if any of the missing pages is accessed on the destination VM, the VM
thread is halted waiting for the page to be migrated, which means it can
stay halted until the recovery is complete.
The impact of accessing missing pages depends on the configuration of the
guest. For example, with async page fault enabled, the guest can
proactively schedule out the threads that access missing pages.
Postcopy states
---------------
@@ -800,31 +765,36 @@ ADVISE->DISCARD->LISTEN->RUNNING->END
(although it can't do the cleanup it would do as it
finishes a normal migration).
- Paused
Postcopy can run into a paused state (normally on both sides at the same
time), where all threads will be temporarily halted, mostly due to network
errors. When reaching the paused state, migration will make sure the QEMU
binaries on both sides maintain the data without corrupting the VM. To
continue the migration, the admin needs to fix the migration channel using
the QMP command 'migrate-recover' on the destination node, then resume the
migration using the QMP command 'migrate' again on the source node, with
the resume=true flag set.
- End
The listen thread can now quit and perform the cleanup of migration
state; the migration is now complete.
Source side page map
--------------------
Source side page maps
---------------------
The 'migration bitmap' in postcopy is basically the same as in precopy,
where each bit indicates that a page is 'dirty' - i.e. needs
sending. During the precopy phase this is updated as the CPU dirties
pages; however, during postcopy the CPUs are stopped and nothing should
dirty anything any more. Instead, dirty bits are cleared when the relevant
pages are sent during postcopy.
The source side keeps two bitmaps during postcopy: the 'migration bitmap'
and the 'unsent map'. The 'migration bitmap' is basically the same as in
the precopy case, and holds a bit per page to indicate that the page is
'dirty' - i.e. needs sending. During the precopy phase this is updated as
the CPU dirties pages; however, during postcopy the CPUs are stopped and
nothing should dirty anything any more.
The 'unsent map' is used for the transition to postcopy. It is a bitmap that
has a bit cleared whenever a page is sent to the destination; during
the transition to postcopy mode it is combined with the migration bitmap
to form the set of pages that:
a) Have been sent but then redirtied (which must be discarded)
b) Have not yet been sent - which also must be discarded, to cause any
transparent huge pages built during precopy to be broken.
Note that the contents of the 'unsent map' are sacrificed during the calculation
of the discard set and thus aren't valid once in postcopy. The dirty map
is still valid and is used to ensure that no page is sent more than once. Any
request for a page that has already been sent is ignored. Duplicate requests
such as this can happen as a page is sent at about the same time the
destination accesses it.
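A minimal sketch of that combination step, with hypothetical names (the
real implementation in QEMU's migration code differs in detail)::

    #include <stddef.h>

    /*
     * Build the discard set at the precopy->postcopy handover. A set bit
     * in 'unsent' means the page has not been sent yet (bits are cleared
     * on send); 'dirty' is the migration bitmap. Their union covers both
     * redirtied and never-sent pages, and reusing 'unsent' to hold the
     * result is why its original contents are sacrificed.
     */
    static void build_discard_set(unsigned long *unsent,
                                  const unsigned long *dirty,
                                  size_t nwords)
    {
        for (size_t i = 0; i < nwords; i++) {
            unsent[i] |= dirty[i];
        }
    }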
Postcopy with hugepages
-----------------------
@@ -883,16 +853,6 @@ Retro-fitting postcopy to existing clients is possible:
guest memory access is made while holding a lock then all other
threads waiting for that lock will also be blocked.
Postcopy Preemption Mode
------------------------
Postcopy preempt is a capability introduced in the QEMU 8.0 release. It
allows urgent pages (those for which the destination QEMU explicitly
requested a page fault) to be sent on a separate preempt channel, rather
than queued in the background migration channel. Anyone who cares about
page fault latencies during a postcopy migration should enable this
feature. By default, it is not enabled.
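Enabling it follows the usual migration capability pattern, for example
over QMP on both sides before starting the migration (a sketch)::

    {"execute": "migrate-set-capabilities",
     "arguments": {"capabilities": [
         {"capability": "postcopy-preempt", "state": true}]}}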
Firmware
========


@@ -1,7 +0,0 @@
.. _qdev-api:
================================
QEMU Device (qdev) API Reference
================================
.. kernel-doc:: include/hw/qdev-core.h


@@ -1,9 +0,0 @@
.. _qom-api:
=====================================
QEMU Object Model (QOM) API Reference
=====================================
This is the complete API documentation for :ref:`qom`.
.. kernel-doc:: include/qom/object.h


@@ -13,24 +13,6 @@ features:
- System for dynamically registering types
- Support for single-inheritance of types
- Multiple inheritance of stateless interfaces
- Mapping internal members to publicly exposed properties
The root object class is TYPE_OBJECT which provides for the basic
object methods.
The QOM tree
============
The QOM tree is a composition tree which represents all of the objects
that make up a QEMU "machine". You can view this tree by running
``info qom-tree`` in the :ref:`QEMU monitor`. It will contain both
objects created by the machine itself as well as those created due to
user configuration.
Creating a QOM class
====================
A simple minimal device implementation may look something like below:
.. code-block:: c
:caption: Creating a minimal type
@@ -44,7 +26,7 @@ A simple minimal device implementation may look something like below:
typedef DeviceClass MyDeviceClass;
typedef struct MyDevice
{
DeviceState parent_obj;
DeviceState parent;
int reg0, reg1, reg2;
} MyDevice;
@@ -66,12 +48,6 @@ In the above example, we create a simple type that is described by #TypeInfo.
#TypeInfo describes information about the type including what it inherits
from, the instance and class size, and constructor/destructor hooks.
The TYPE_DEVICE class is the parent class for all modern devices
implemented in QEMU and adds some specific methods to handle the QEMU
device model. This includes managing the lifetime of devices from
creation through to when they become visible to the guest and are
eventually unrealized.
Alternatively, several static types can be registered using the helper
macro DEFINE_TYPES(), as in the sketch below.
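A hedged sketch of what that registration could look like for the
``my_device`` type above (the array name is arbitrary)::

    static const TypeInfo my_device_types[] = {
        {
            .name          = TYPE_MY_DEVICE,
            .parent        = TYPE_DEVICE,
            .instance_size = sizeof(MyDevice),
        },
    };

    DEFINE_TYPES(my_device_types)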
@@ -122,7 +98,7 @@ when the object is needed.
module_obj(TYPE_MY_DEVICE);
Class Initialization
--------------------
====================
Before an object is initialized, the class for the object must be
initialized. There is only one class object for all instance objects
@@ -171,7 +147,7 @@ will also have a wrapper function to call it easily:
typedef struct MyDeviceClass
{
DeviceClass parent_class;
DeviceClass parent;
void (*frobnicate) (MyDevice *obj);
} MyDeviceClass;
@@ -192,7 +168,7 @@ will also have a wrapper function to call it easily:
}
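The wrapper body elided between these hunks conventionally looks
something like the following sketch, assuming the usual generated
``MY_DEVICE_GET_CLASS`` cast macro::

    void my_device_frobnicate(MyDevice *obj)
    {
        MyDeviceClass *klass = MY_DEVICE_GET_CLASS(obj);

        klass->frobnicate(obj);
    }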
Interfaces
----------
==========
Interfaces allow a limited form of multiple inheritance. Instances are
similar to normal types except for the fact that they are only defined by
@@ -206,7 +182,7 @@ an argument to a method on its corresponding SomethingIfClass, or to
dynamically cast it to an object that implements the interface.
Methods
-------
=======
A *method* is a function within the namespace scope of
a class. It usually operates on the object instance by passing it as a
@@ -299,8 +275,8 @@ Alternatively, object_class_by_name() can be used to obtain the class and
its non-overridden methods for a specific type. This would correspond to
``MyClass::method(...)`` in C++.
One example of such methods is ``DeviceClass.reset``. More examples
can be found at :ref:`device-life-cycle`.
The first example of such a QOM method was #CPUClass.reset,
another example is #DeviceClass.realize.
Standard type declaration and definition macros
===============================================
@@ -406,32 +382,9 @@ OBJECT_DEFINE_ABSTRACT_TYPE() macro can be used instead:
OBJECT_DEFINE_ABSTRACT_TYPE(MyDevice, my_device,
MY_DEVICE, DEVICE)
.. _device-life-cycle:
Device Life-cycle
=================
As class initialisation cannot fail, devices have two additional
methods to handle the creation of dynamic devices. The ``realize``
function is called with an ``Error **`` pointer which should be set if
the device cannot complete its setup. Otherwise, on successful
completion of the ``realize`` method, the device object is added to the
QOM tree and made visible to the guest.
The reverse function is ``unrealize``, which is where clean-up
code lives to tidy up after the system is done with the device.
All devices can be instantiated by C code, however only some can be
created dynamically via the command line or monitor.
Likewise only some can be unplugged after creation and need an
explicit ``unrealize`` implementation. This is determined by the
``user_creatable`` variable in the root ``DeviceClass`` structure.
Devices can only be unplugged if their ``parent_bus`` has a registered
``HotplugHandler``.
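A minimal sketch of such a pair, reusing the ``MyDevice`` type from
earlier (the validation check is invented for illustration)::

    static void my_device_realize(DeviceState *dev, Error **errp)
    {
        MyDevice *s = MY_DEVICE(dev);

        if (s->reg0 != 0) {               /* invented sanity check */
            error_setg(errp, "reg0 must be zero at reset");
            return;   /* setup failed: never becomes guest-visible */
        }
    }

    static void my_device_unrealize(DeviceState *dev)
    {
        /* tidy up whatever realize() set up */
    }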
API Reference
=============
-------------
See the :ref:`QOM API<qom-api>` and :ref:`QDEV API<qdev-api>`
documents for the complete API description.
.. kernel-doc:: include/qom/object.h

Some files were not shown because too many files have changed in this diff.