Compare commits

16 Commits

Author SHA1 Message Date
Fabiano Rosas
a0efb212fb migration: Add completion tracepoint
Add a completion tracepoint that provides basic stats for debugging. It
displays throughput (MB/s and pages/s), total time (ms), and total bytes
transferred (B).

Usage:
  $QEMU ... -trace migration_status

Output:
  migration_status 1506 MB/s, 436725 pages/s, 8698 ms 13093280510 B

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-06 18:08:14 -03:00
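
(As a quick consistency check on the example output above: 13093280510 B
transferred over 8698 ms works out to roughly 1505 MB/s, matching the
reported ~1506 MB/s.)
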
Nikolay Borisov
9e086f2e2a tests/qtest: migration-test: Add tests for fixed-ram file-based migration
Add basic tests for 'fixed-ram' migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.com>
2023-03-06 18:01:25 -03:00
Nikolay Borisov
cc57d6c75f migration: Add support for 'fixed-ram' migration restore
Add the necessary code to parse the format changes for the 'fixed-ram'
capability. One of the more notable changes in behavior is that in the
'fixed-ram' case RAM pages are restored in one go rather than by
constantly looping through the migration stream. Also, due to
idiosyncrasies of the format, I have added 'ram_migrated' since it was
easier to simply return directly from ->load_state than to introduce
more conditionals around the code to prevent ->load_state from being
called multiple times (from
qemu_loadvm_section_start_full/qemu_loadvm_section_part_end, i.e. from
multiple QEMU_VM_SECTION_(PART|END) flags).

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
2023-03-06 17:58:44 -03:00
Nikolay Borisov
c204cfeeb7 migration: Refactor precopy ram loading code
To facilitate the implementation of the 'fixed-ram' migration restore,
factor out the code responsible for parsing the ramblock headers. This
also makes ram_load_precopy easier to comprehend.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
2023-03-06 17:53:51 -03:00
Nikolay Borisov
d5cd94bbaf migration/ram: Introduce 'fixed-ram' migration stream capability
Implement the 'fixed-ram' feature. The core of the feature is to ensure
that each RAM page has a specific, fixed offset in the resulting
migration stream. The reasons we'd want such behavior are twofold:

 - When doing a 'fixed-ram' migration the resulting file will have a
   bounded size, since pages which are dirtied multiple times will
   always go to a fixed location in the file, rather than constantly
   being appended to a sequential stream. This eliminates cases where a
   VM with, say, 1G of RAM can produce a migration file that's tens of
   GBs if the workload constantly re-dirties memory.

 - It paves the way to implementing DIO-enabled save/restore of the
   migration stream, as the pages are guaranteed to be written at
   aligned offsets.

The feature requires changing the stream format. First, a bitmap is
introduced which tracks which pages have been written (i.e. dirtied)
during migration; it is subsequently written to the resulting file,
again at a fixed location for every ramblock. Zero pages are ignored as
they would be zero on the destination as well. With the changed format
the data looks like the following:

|name len|name|used_len|pc*|bitmap_size|pages_offset|bitmap|pages|

* pc - refers to the page_size/mr->addr members, so newly added members
begin from "bitmap_size".

This layout is initialized during ram_save_setup, so instead of having
a sequential stream of pages that follow the ramblock headers, the dirty
pages for a ramblock follow its header. Since all pages have a fixed
location, RAM_SAVE_FLAG_EOS is no longer generated on every migration
iteration; there is effectively a single RAM_SAVE_FLAG_EOS right at the
end.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
2023-03-06 17:50:31 -03:00
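
To make the layout above concrete, here is a small standalone sketch (an
editor's illustration, not code from this series). It assumes 64-bit
fields and 4 KiB pages, omits the pre-existing page_size/mr->addr
members ('pc' in the layout), and shows how pages_offset turns page
writes into in-place positional writes:

  /*
   * Illustrative sketch only; NOT QEMU code. Error handling is omitted
   * for brevity, and the pre-existing page_size/mr->addr fields are
   * left out.
   */
  #include <fcntl.h>
  #include <stdint.h>
  #include <string.h>
  #include <unistd.h>

  #define PAGE_SIZE 4096u

  /* Emit one per-ramblock header:
   * |name len|name|used_len|bitmap_size|pages_offset|bitmap| */
  static off_t write_block_header(int fd, const char *name,
                                  uint64_t used_len, uint64_t nr_pages)
  {
      uint8_t name_len = strlen(name);
      uint64_t bitmap_size = (nr_pages + 7) / 8;          /* one bit per page */
      uint64_t hdr_end = lseek(fd, 0, SEEK_CUR)
                         + 1 + name_len + 3 * sizeof(uint64_t) + bitmap_size;
      /* Align the pages area so a DIO-enabled writer could use it as-is. */
      uint64_t pages_offset = (hdr_end + PAGE_SIZE - 1) & ~(uint64_t)(PAGE_SIZE - 1);
      uint8_t zeros[64] = {0};

      write(fd, &name_len, sizeof name_len);
      write(fd, name, name_len);
      write(fd, &used_len, sizeof used_len);
      write(fd, &bitmap_size, sizeof bitmap_size);
      write(fd, &pages_offset, sizeof pages_offset);
      for (uint64_t left = bitmap_size; left > 0; ) {     /* bitmap, initially clear */
          size_t n = left < sizeof zeros ? left : sizeof zeros;
          write(fd, zeros, n);
          left -= n;
      }
      return pages_offset;                                /* pages live from here */
  }

  /* A page always lands in the same slot, so re-dirtying it never grows the file. */
  static void write_page(int fd, off_t pages_offset, uint64_t page_index,
                         const void *page)
  {
      pwrite(fd, page, PAGE_SIZE, pages_offset + page_index * PAGE_SIZE);
      /* A real implementation would also set bit 'page_index' in the bitmap. */
  }

  int main(void)
  {
      int fd = open("fixed-ram-demo.bin", O_CREAT | O_RDWR | O_TRUNC, 0600);
      char page[PAGE_SIZE] = "dirty page contents";
      off_t pages_off = write_block_header(fd, "pc.ram", 8 * PAGE_SIZE, 8);

      write_page(fd, pages_off, 3, page);   /* page 3 dirtied twice...         */
      write_page(fd, pages_off, 3, page);   /* ...still occupies a single slot */
      close(fd);
      return 0;
  }

Because every page index maps to one aligned slot after pages_offset,
rewriting page 3 twice overwrites the same bytes instead of growing the
file, which is exactly the bounded-size property described above.
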
Fabiano Rosas
4c2f145632 fixup! io: Add generic pwritev/preadv interface 2023-03-06 17:46:00 -03:00
Fabiano Rosas
fb56e2387b migration: Add qemu_get/set_buffer_at
Restoring a 'fixed-ram' enabled migration stream requires reading from
specific offsets in the file, so add a helper to QEMUFile that uses the
newly introduced qio_channel_file_preadv.

Signed-off-by: Fabiano Rosas <farosas@suse.com>
---
restructured to use qio_channel_file_preadv instead of the _full
variant
2023-03-06 17:43:23 -03:00
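
At the POSIX level, what such a helper needs boils down to reading at an
absolute offset without disturbing the file position, i.e. a pread()
loop. A plain-POSIX analogy (an illustration, not the actual QEMUFile or
qio_channel_file_preadv code):

  #include <unistd.h>

  /*
   * Read exactly 'len' bytes at absolute 'offset', leaving the fd's
   * position untouched; this is the property a fixed-ram restore relies
   * on to jump straight to a ramblock's pages area.
   */
  ssize_t read_at(int fd, void *buf, size_t len, off_t offset)
  {
      size_t done = 0;

      while (done < len) {
          ssize_t r = pread(fd, (char *)buf + done, len - done, offset + done);
          if (r < 0) {
              return -1;          /* I/O error */
          }
          if (r == 0) {
              break;              /* unexpected EOF */
          }
          done += r;
      }
      return done;
  }
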
Nikolay Borisov
5686566113 migration/qemu-file: add utility methods for working with seekable channels
Add utility methods that will be needed when implementing 'fixed-ram'
migration capability.

qemu_file_is_seekable
qemu_put_buffer_at
qemu_set_offset
qemu_get_offset

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
(farosas) fixed total_transferred accounting
2023-03-06 16:37:35 -03:00
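
Described in plain POSIX terms (an analogy only; the real helpers go
through the QIOChannel layer rather than a raw fd), the four methods
correspond to fstat(), lseek() and pwrite():

  #include <fcntl.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* qemu_file_is_seekable() ~ "is this a regular file or block device?" */
  int file_is_seekable(int fd)
  {
      struct stat st;
      return fstat(fd, &st) == 0 && (S_ISREG(st.st_mode) || S_ISBLK(st.st_mode));
  }

  /* qemu_set_offset()/qemu_get_offset() ~ lseek() */
  off_t file_set_offset(int fd, off_t off) { return lseek(fd, off, SEEK_SET); }
  off_t file_get_offset(int fd)            { return lseek(fd, 0, SEEK_CUR);  }

  /* qemu_put_buffer_at() ~ pwrite(): the write lands at 'off' regardless of
   * the current position, so a re-dirtied page overwrites its old copy. */
  ssize_t file_put_buffer_at(int fd, const void *buf, size_t len, off_t off)
  {
      return pwrite(fd, buf, len, off);
  }
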
Nikolay Borisov
4c16cca0e1 io: implement io_pwritev/preadv for QIOChannelFile
The upcoming 'fixed-ram' feature will require QEMU to write data to
(and restore from) specific offsets of the migration file.

Add a minimal implementation of pwritev/preadv and expose them via the
io_pwritev and io_preadv interfaces.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-03-06 16:34:04 -03:00
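
The primitive being wrapped here is positional, vectored I/O. A
self-contained POSIX demo of preadv()/pwritev() themselves (not the
QIOChannelFile code; error handling omitted):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/uio.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("demo.bin", O_CREAT | O_RDWR | O_TRUNC, 0600);
      char hdr[] = "HDR", pay[] = "PAYLOAD";
      struct iovec out[2] = {
          { .iov_base = hdr, .iov_len = sizeof hdr - 1 },
          { .iov_base = pay, .iov_len = sizeof pay - 1 },
      };

      /* Write both buffers at absolute offset 4096, without moving the
       * file position. */
      pwritev(fd, out, 2, 4096);

      char buf[16] = {0};
      struct iovec in = { .iov_base = buf, .iov_len = sizeof buf - 1 };
      preadv(fd, &in, 1, 4096);           /* read back from the same offset */
      printf("%s\n", buf);                /* prints HDRPAYLOAD              */

      close(fd);
      return 0;
  }
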
Nikolay Borisov
895c40ce0a io: Add generic pwritev/preadv interface
Introduce basic pwritev/preadv support in the generic channel layer. A
specific implementation will follow for the file channel, as this is
required to support migration streams with a fixed location for each RAM
page.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-06 16:26:01 -03:00
Nikolay Borisov
cad3bc2be3 io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file
Add a generic QIOChannel feature SEEKABLE which will be used by the
qemu_file* APIs. For the time being this is only implemented for file
channels.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-03-06 16:07:38 -03:00
Nikolay Borisov
ea1efd3d17 migration: Initial support of fixed-ram feature for analyze-migration.py
In order to allow the analyze-migration.py script to work with migration
streams that have the 'fixed-ram' capability set, it needs access to the
stream's configuration object. This commit enables that by making the
migration JSON writer part of the MigrationState struct, allowing the
configuration object to be serialized to JSON.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
2023-03-06 16:07:38 -03:00
Nikolay Borisov
3ce4e2e8d5 tests/qtest: migration-test: Add tests for file-based migration
Add basic tests for file-based migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
(farosas) fix segfault when connect_uri is not set
2023-03-06 16:07:38 -03:00
Nikolay Borisov
c1eef47d79 tests/qtest: migration: Add migrate_incoming_qmp helper
File-based migration requires the target to initiate its migration after
the source has finished writing out the data to the file. Currently
there's no easy way to initiate 'migrate-incoming'; allow this by
introducing a migrate_incoming_qmp helper, similar to migrate_qmp.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.com>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-03-06 16:07:38 -03:00
Nikolay Borisov
80f68f67a9 migration: Add support for 'file:' uri for incoming migration
This is a counterpart to the 'file:' uri support for source migration:
now a file can also serve as the source of an incoming migration.

Unlike other migration protocol backends, the 'file' protocol cannot
honour non-blocking mode. POSIX file/block storage will always report
ready to read/write, regardless of how slow the underlying storage
will be at servicing the request.

For incoming migration this limitation may result in the main event
loop not being fully responsive while loading the VM state. This
won't impact the VM since it is not running at this phase, however,
it may impact management applications.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
2023-03-06 15:20:04 -03:00
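
With this in place the destination can, for example, be started with
'-incoming defer' and then pointed at the state file over QMP (the path
below is illustrative):

  { "execute": "migrate-incoming", "arguments": { "uri": "file:/path/to/vm-state" } }
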
Nikolay Borisov
4ed18df9df migration: support file: uri for source migration
Implement support for a "file:" uri so that a migration can be initiated
directly to a file from QEMU.

Unlike other migration protocol backends, the 'file' protocol cannot
honour non-blocking mode. POSIX file/block storage will always report
ready to read/write, regardless of how slow the underlying storage
will be at servicing the request.

For outgoing migration this limitation is not a serious problem as
the migration data transfer always happens in a dedicated thread.
It may, however, result in delays in honouring a request to cancel
the migration operation.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
2023-03-06 15:20:04 -03:00
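
Usage then mirrors the other transports; for example, from the HMP
monitor (illustrative path):

  (qemu) migrate file:/path/to/vm-state
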
821 changed files with 15103 additions and 34783 deletions

.cirrus.yml (new file, 111 lines)
View File

@@ -0,0 +1,111 @@
env:
CIRRUS_CLONE_DEPTH: 1
windows_msys2_task:
timeout_in: 90m
windows_container:
image: cirrusci/windowsservercore:2019
os_version: 2019
cpu: 8
memory: 8G
env:
CIRRUS_SHELL: powershell
MSYS: winsymlinks:native
MSYSTEM: MINGW64
MSYS2_URL: https://github.com/msys2/msys2-installer/releases/download/2022-06-03/msys2-base-x86_64-20220603.sfx.exe
MSYS2_FINGERPRINT: 0
MSYS2_PACKAGES: "
diffutils git grep make pkg-config sed
mingw-w64-x86_64-python
mingw-w64-x86_64-python-sphinx
mingw-w64-x86_64-toolchain
mingw-w64-x86_64-SDL2
mingw-w64-x86_64-SDL2_image
mingw-w64-x86_64-gtk3
mingw-w64-x86_64-glib2
mingw-w64-x86_64-ninja
mingw-w64-x86_64-jemalloc
mingw-w64-x86_64-lzo2
mingw-w64-x86_64-zstd
mingw-w64-x86_64-libjpeg-turbo
mingw-w64-x86_64-pixman
mingw-w64-x86_64-libgcrypt
mingw-w64-x86_64-libpng
mingw-w64-x86_64-libssh
mingw-w64-x86_64-snappy
mingw-w64-x86_64-libusb
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-libtasn1
mingw-w64-x86_64-nettle
mingw-w64-x86_64-cyrus-sasl
mingw-w64-x86_64-curl
mingw-w64-x86_64-gnutls
mingw-w64-x86_64-libnfs
"
CHERE_INVOKING: 1
msys2_cache:
folder: C:\tools\archive
reupload_on_changes: false
# These env variables are used to generate fingerprint to trigger the cache procedure
# If wanna to force re-populate msys2, increase MSYS2_FINGERPRINT
fingerprint_script:
- |
echo $env:CIRRUS_TASK_NAME
echo $env:MSYS2_URL
echo $env:MSYS2_FINGERPRINT
echo $env:MSYS2_PACKAGES
populate_script:
- |
md -Force C:\tools\archive\pkg
$start_time = Get-Date
bitsadmin /transfer msys_download /dynamic /download /priority FOREGROUND $env:MSYS2_URL C:\tools\archive\base.exe
Write-Output "Download time taken: $((Get-Date).Subtract($start_time))"
cd C:\tools
C:\tools\archive\base.exe -y
del -Force C:\tools\archive\base.exe
Write-Output "Base install time taken: $((Get-Date).Subtract($start_time))"
$start_time = Get-Date
((Get-Content -path C:\tools\msys64\etc\\post-install\\07-pacman-key.post -Raw) -replace '--refresh-keys', '--version') | Set-Content -Path C:\tools\msys64\etc\\post-install\\07-pacman-key.post
C:\tools\msys64\usr\bin\bash.exe -lc "sed -i 's/^CheckSpace/#CheckSpace/g' /etc/pacman.conf"
C:\tools\msys64\usr\bin\bash.exe -lc "export"
C:\tools\msys64\usr\bin\pacman.exe --noconfirm -Sy
echo Y | C:\tools\msys64\usr\bin\pacman.exe --noconfirm -Suu --overwrite=*
taskkill /F /FI "MODULES eq msys-2.0.dll"
tasklist
C:\tools\msys64\usr\bin\bash.exe -lc "mv -f /etc/pacman.conf.pacnew /etc/pacman.conf || true"
C:\tools\msys64\usr\bin\bash.exe -lc "pacman --noconfirm -Syuu --overwrite=*"
Write-Output "Core install time taken: $((Get-Date).Subtract($start_time))"
$start_time = Get-Date
C:\tools\msys64\usr\bin\bash.exe -lc "pacman --noconfirm -S --needed $env:MSYS2_PACKAGES"
Write-Output "Package install time taken: $((Get-Date).Subtract($start_time))"
$start_time = Get-Date
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\etc\mtab
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\fd
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\stderr
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\stdin
del -Force -ErrorAction SilentlyContinue C:\tools\msys64\dev\stdout
del -Force -Recurse -ErrorAction SilentlyContinue C:\tools\msys64\var\cache\pacman\pkg
tar cf C:\tools\archive\msys64.tar -C C:\tools\ msys64
Write-Output "Package archive time taken: $((Get-Date).Subtract($start_time))"
del -Force -Recurse -ErrorAction SilentlyContinue c:\tools\msys64
install_script:
- |
$start_time = Get-Date
cd C:\tools
ls C:\tools\archive\msys64.tar
tar xf C:\tools\archive\msys64.tar
Write-Output "Extract msys2 time taken: $((Get-Date).Subtract($start_time))"
script:
- mkdir build
- cd build
- C:\tools\msys64\usr\bin\bash.exe -lc "../configure --python=python3
--target-list-exclude=i386-softmmu,ppc64-softmmu,aarch64-softmmu,mips64-softmmu,mipsel-softmmu,sh4-softmmu"
- C:\tools\msys64\usr\bin\bash.exe -lc "make -j8"
- exit $LastExitCode
test_script:
- C:\tools\msys64\usr\bin\bash.exe -lc "cd build && make V=1 check"
- exit $LastExitCode

View File

@@ -9,7 +9,8 @@ build-system-alpine:
- job: amd64-alpine-container
variables:
IMAGE: alpine
TARGETS: avr-softmmu loongarch64-softmmu mips64-softmmu mipsel-softmmu
TARGETS: aarch64-softmmu alpha-softmmu cris-softmmu hppa-softmmu
microblazeel-softmmu mips64el-softmmu
MAKE_CHECK_ARGS: check-build
CONFIGURE_ARGS: --enable-docs --enable-trace-backends=log,simple,syslog
@@ -71,8 +72,8 @@ build-system-debian:
variables:
IMAGE: debian-amd64
CONFIGURE_ARGS: --with-coroutine=sigaltstack
TARGETS: arm-softmmu i386-softmmu riscv64-softmmu sh4eb-softmmu
sparc-softmmu xtensaeb-softmmu
TARGETS: arm-softmmu avr-softmmu i386-softmmu mipsel-softmmu
riscv64-softmmu sh4eb-softmmu sparc-softmmu xtensaeb-softmmu
MAKE_CHECK_ARGS: check-build
check-system-debian:

View File

@@ -1,6 +1,13 @@
include:
- local: '/.gitlab-ci.d/crossbuild-template.yml'
cross-armel-system:
extends: .cross_system_build_job
needs:
job: armel-debian-cross-container
variables:
IMAGE: debian-armel-cross
cross-armel-user:
extends: .cross_user_build_job
needs:
@@ -8,6 +15,13 @@ cross-armel-user:
variables:
IMAGE: debian-armel-cross
cross-armhf-system:
extends: .cross_system_build_job
needs:
job: armhf-debian-cross-container
variables:
IMAGE: debian-armhf-cross
cross-armhf-user:
extends: .cross_user_build_job
needs:
@@ -29,6 +43,16 @@ cross-arm64-user:
variables:
IMAGE: debian-arm64-cross
cross-i386-system:
extends:
- .cross_system_build_job
- .cross_test_artifacts
needs:
job: i386-fedora-cross-container
variables:
IMAGE: fedora-i386-cross
MAKE_CHECK_ARGS: check-qtest
cross-i386-user:
extends:
- .cross_user_build_job

View File

@@ -1,9 +1,4 @@
# All centos-stream-8 jobs should run successfully in an environment
# setup by the scripts/ci/setup/stream/8/build-environment.yml task
# "Installation of extra packages to build QEMU"
centos-stream-8-x86_64:
extends: .custom_runner_template
allow_failure: true
needs: []
stage: build
@@ -13,6 +8,15 @@ centos-stream-8-x86_64:
rules:
- if: '$CI_PROJECT_NAMESPACE == "qemu-project" && $CI_COMMIT_BRANCH =~ /^staging/'
- if: "$CENTOS_STREAM_8_x86_64_RUNNER_AVAILABLE"
artifacts:
name: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"
when: on_failure
expire_in: 7 days
paths:
- build/tests/results/latest/results.xml
- build/tests/results/latest/test-results
reports:
junit: build/tests/results/latest/results.xml
before_script:
- JOBS=$(expr $(nproc) + 1)
script:
@@ -21,4 +25,6 @@ centos-stream-8-x86_64:
- ../scripts/ci/org.centos/stream/8/x86_64/configure
|| { cat config.log meson-logs/meson-log.txt; exit 1; }
- make -j"$JOBS"
- make NINJA=":" check check-avocado
- make NINJA=":" check
|| { cat meson-logs/testlog.txt; exit 1; } ;
- ../scripts/ci/org.centos/stream/8/x86_64/test-avocado

.gitlab-ci.d/edk2.yml (new file, 85 lines)
View File

@@ -0,0 +1,85 @@
# All jobs needing docker-edk2 must use the same rules it uses.
.edk2_job_rules:
rules:
# Forks don't get pipelines unless QEMU_CI=1 or QEMU_CI=2 is set
- if: '$QEMU_CI != "1" && $QEMU_CI != "2" && $CI_PROJECT_NAMESPACE != "qemu-project"'
when: never
# In forks, if QEMU_CI=1 is set, then create manual job
# if any of the files affecting the build are touched
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project"'
changes:
- .gitlab-ci.d/edk2.yml
- .gitlab-ci.d/edk2/Dockerfile
- roms/edk2/*
when: manual
# In forks, if QEMU_CI=1 is set, then create manual job
# if the branch/tag starts with 'edk2'
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_REF_NAME =~ /^edk2/'
when: manual
# In forks, if QEMU_CI=1 is set, then create manual job
# if last commit msg contains 'EDK2' (case insensitive)
- if: '$QEMU_CI == "1" && $CI_PROJECT_NAMESPACE != "qemu-project" && $CI_COMMIT_MESSAGE =~ /edk2/i'
when: manual
# Run if any files affecting the build output are touched
- changes:
- .gitlab-ci.d/edk2.yml
- .gitlab-ci.d/edk2/Dockerfile
- roms/edk2/*
when: on_success
# Run if the branch/tag starts with 'edk2'
- if: '$CI_COMMIT_REF_NAME =~ /^edk2/'
when: on_success
# Run if last commit msg contains 'EDK2' (case insensitive)
- if: '$CI_COMMIT_MESSAGE =~ /edk2/i'
when: on_success
docker-edk2:
extends: .edk2_job_rules
stage: containers
image: docker:19.03.1
services:
- docker:19.03.1-dind
variables:
GIT_DEPTH: 3
IMAGE_TAG: $CI_REGISTRY_IMAGE:edk2-cross-build
# We don't use TLS
DOCKER_HOST: tcp://docker:2375
DOCKER_TLS_CERTDIR: ""
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- docker pull $IMAGE_TAG || true
- docker build --cache-from $IMAGE_TAG --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
--tag $IMAGE_TAG .gitlab-ci.d/edk2
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
- docker push $IMAGE_TAG
build-edk2:
extends: .edk2_job_rules
stage: build
needs: ['docker-edk2']
artifacts:
paths: # 'artifacts.zip' will contains the following files:
- pc-bios/edk2*bz2
- pc-bios/edk2-licenses.txt
- edk2-stdout.log
- edk2-stderr.log
image: $CI_REGISTRY_IMAGE:edk2-cross-build
variables:
GIT_DEPTH: 3
script: # Clone the required submodules and build EDK2
- git submodule update --init roms/edk2
- git -C roms/edk2 submodule update --init --
ArmPkg/Library/ArmSoftFloatLib/berkeley-softfloat-3
BaseTools/Source/C/BrotliCompress/brotli
CryptoPkg/Library/OpensslLib/openssl
MdeModulePkg/Library/BrotliCustomDecompressLib/brotli
- export JOBS=$(($(getconf _NPROCESSORS_ONLN) + 1))
- echo "=== Using ${JOBS} simultaneous jobs ==="
- make -j${JOBS} -C roms efi 2>&1 1>edk2-stdout.log | tee -a edk2-stderr.log >&2

View File

@@ -0,0 +1,27 @@
#
# Docker image to cross-compile EDK2 firmware binaries
#
FROM ubuntu:18.04
MAINTAINER Philippe Mathieu-Daudé <f4bug@amsat.org>
# Install packages required to build EDK2
RUN apt update \
&& \
\
DEBIAN_FRONTEND=noninteractive \
apt install --assume-yes --no-install-recommends \
build-essential \
ca-certificates \
dos2unix \
gcc-aarch64-linux-gnu \
gcc-arm-linux-gnueabi \
git \
iasl \
make \
nasm \
python3 \
uuid-dev \
&& \
\
rm -rf /var/lib/apt/lists/*

View File

@@ -42,9 +42,9 @@
docker-opensbi:
extends: .opensbi_job_rules
stage: containers
image: docker:stable
image: docker:19.03.1
services:
- docker:stable-dind
- docker:19.03.1-dind
variables:
GIT_DEPTH: 3
IMAGE_TAG: $CI_REGISTRY_IMAGE:opensbi-cross-build

View File

@@ -15,7 +15,6 @@ RUN apt update \
ca-certificates \
git \
make \
python3 \
wget \
&& \
\

View File

@@ -4,6 +4,7 @@
include:
- local: '/.gitlab-ci.d/base.yml'
- local: '/.gitlab-ci.d/stages.yml'
- local: '/.gitlab-ci.d/edk2.yml'
- local: '/.gitlab-ci.d/opensbi.yml'
- local: '/.gitlab-ci.d/containers.yml'
- local: '/.gitlab-ci.d/crossbuilds.yml'

View File

@@ -59,7 +59,6 @@ msys2-64bit:
mingw-w64-x86_64-SDL2
mingw-w64-x86_64-SDL2_image
mingw-w64-x86_64-snappy
mingw-w64-x86_64-spice
mingw-w64-x86_64-usbredir
mingw-w64-x86_64-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory
@@ -109,7 +108,6 @@ msys2-32bit:
mingw-w64-i686-SDL2
mingw-w64-i686-SDL2_image
mingw-w64-i686-snappy
mingw-w64-i686-spice
mingw-w64-i686-usbredir
mingw-w64-i686-zstd "
- $env:CHERE_INVOKING = 'yes' # Preserve the current working directory

View File

@@ -56,7 +56,6 @@ Aleksandar Rikalo <aleksandar.rikalo@syrmia.com> <aleksandar.rikalo@rt-rk.com>
Alexander Graf <agraf@csgraf.de> <agraf@suse.de>
Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
Christian Borntraeger <borntraeger@linux.ibm.com> <borntraeger@de.ibm.com>
Damien Hedde <damien.hedde@dahe.fr> <damien.hedde@greensocs.com>
Filip Bozuta <filip.bozuta@syrmia.com> <filip.bozuta@rt-rk.com.com>
Frederic Konrad <konrad.frederic@yahoo.fr> <fred.konrad@greensocs.com>
Frederic Konrad <konrad.frederic@yahoo.fr> <konrad@adacore.com>

View File

@@ -136,8 +136,6 @@ F: docs/devel/decodetree.rst
F: docs/devel/tcg*
F: include/exec/cpu*.h
F: include/exec/exec-all.h
F: include/exec/tb-flush.h
F: include/exec/target_long.h
F: include/exec/helper*.h
F: include/sysemu/cpus.h
F: include/sysemu/tcg.h
@@ -257,9 +255,9 @@ F: docs/system/cpu-models-mips.rst.inc
F: tests/tcg/mips/
NiosII TCG CPUs
R: Chris Wulff <crwulff@gmail.com>
R: Marek Vasut <marex@denx.de>
S: Orphan
M: Chris Wulff <crwulff@gmail.com>
M: Marek Vasut <marex@denx.de>
S: Maintained
F: target/nios2/
F: hw/nios2/
F: disas/nios2.c
@@ -371,7 +369,6 @@ S: Maintained
F: target/xtensa/
F: hw/xtensa/
F: tests/tcg/xtensa/
F: tests/tcg/xtensaeb/
F: disas/xtensa.c
F: include/hw/xtensa/xtensa-isa.h
F: configs/devices/xtensa*/default.mak
@@ -446,15 +443,6 @@ F: target/i386/kvm/
F: target/i386/sev*
F: scripts/kvm/vmxcap
Xen emulation on X86 KVM CPUs
M: David Woodhouse <dwmw2@infradead.org>
M: Paul Durrant <paul@xen.org>
S: Supported
F: include/sysemu/kvm_xen.h
F: target/i386/kvm/xen*
F: hw/i386/kvm/xen*
F: tests/avocado/xen_guest.py
Guest CPU Cores (other accelerators)
------------------------------------
Overall
@@ -1011,6 +999,12 @@ S: Maintained
F: hw/ssi/xlnx-versal-ospi.c
F: include/hw/ssi/xlnx-versal-ospi.h
ARM ACPI Subsystem
M: Shannon Zhao <shannon.zhaosl@gmail.com>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/virt-acpi-build.c
STM32F100
M: Alexandre Iooss <erdnaxe@crans.org>
L: qemu-arm@nongnu.org
@@ -1898,18 +1892,6 @@ F: docs/specs/acpi_nvdimm.rst
F: docs/specs/acpi_pci_hotplug.rst
F: docs/specs/acpi_hw_reduced_hotplug.rst
ARM ACPI Subsystem
M: Shannon Zhao <shannon.zhaosl@gmail.com>
L: qemu-arm@nongnu.org
S: Maintained
F: hw/arm/virt-acpi-build.c
RISC-V ACPI Subsystem
M: Sunil V L <sunilvl@ventanamicro.com>
L: qemu-riscv@nongnu.org
S: Maintained
F: hw/riscv/virt-acpi-build.c
ACPI/VIOT
M: Jean-Philippe Brucker <jean-philippe@linaro.org>
S: Supported
@@ -2237,28 +2219,14 @@ F: docs/specs/rocker.txt
e1000x
M: Dmitry Fleytman <dmitry.fleytman@gmail.com>
R: Akihiko Odaki <akihiko.odaki@daynix.com>
S: Maintained
F: hw/net/e1000x*
e1000e
M: Dmitry Fleytman <dmitry.fleytman@gmail.com>
R: Akihiko Odaki <akihiko.odaki@daynix.com>
S: Maintained
F: hw/net/e1000e*
F: tests/qtest/fuzz-e1000e-test.c
F: tests/qtest/e1000e-test.c
F: tests/qtest/libqos/e1000e.*
igb
M: Akihiko Odaki <akihiko.odaki@daynix.com>
R: Sriram Yagnaraman <sriram.yagnaraman@est.tech>
S: Maintained
F: docs/system/devices/igb.rst
F: hw/net/igb*
F: tests/avocado/igb.py
F: tests/qtest/igb-test.c
F: tests/qtest/libqos/igb.c
eepro100
M: Stefan Weil <sw@weilnetz.de>
@@ -2522,7 +2490,6 @@ Subsystems
----------
Overall Audio backends
M: Gerd Hoffmann <kraxel@redhat.com>
M: Marc-André Lureau <marcandre.lureau@redhat.com>
S: Odd Fixes
F: audio/
X: audio/alsaaudio.c
@@ -2668,6 +2635,7 @@ T: git https://gitlab.com/jsnow/qemu.git jobs
T: git https://gitlab.com/vsementsov/qemu.git block
Compute Express Link
M: Ben Widawsky <ben.widawsky@intel.com>
M: Jonathan Cameron <jonathan.cameron@huawei.com>
R: Fan Ni <fan.ni@samsung.com>
S: Supported
@@ -2767,11 +2735,9 @@ S: Maintained
F: docs/system/gdb.rst
F: gdbstub/*
F: include/exec/gdbstub.h
F: include/gdbstub/*
F: gdb-xml/
F: tests/tcg/multiarch/gdbstub/
F: scripts/feature_to_c.sh
F: scripts/probe-gdb-support.py
Memory API
M: Paolo Bonzini <pbonzini@redhat.com>
@@ -2819,7 +2785,6 @@ F: docs/spice-port-fqdn.txt
Graphics
M: Gerd Hoffmann <kraxel@redhat.com>
M: Marc-André Lureau <marcandre.lureau@redhat.com>
S: Odd Fixes
F: ui/
F: include/ui/
@@ -2903,11 +2868,9 @@ T: git https://gitlab.com/ehabkost/qemu.git machine-next
Cryptodev Backends
M: Gonglei <arei.gonglei@huawei.com>
M: zhenwei pi <pizhenwei@bytedance.com>
S: Maintained
F: include/sysemu/cryptodev*.h
F: backends/cryptodev*.c
F: qapi/cryptodev.json
Python library
M: John Snow <jsnow@redhat.com>
@@ -3345,6 +3308,8 @@ F: roms/edk2
F: roms/edk2-*
F: tests/data/uefi-boot-images/
F: tests/uefi-test-tools/
F: .gitlab-ci.d/edk2.yml
F: .gitlab-ci.d/edk2/
VT-d Emulation
M: Michael S. Tsirkin <mst@redhat.com>
@@ -3364,7 +3329,7 @@ F: .gitlab-ci.d/opensbi/
Clock framework
M: Luc Michel <luc@lmichel.fr>
R: Damien Hedde <damien.hedde@dahe.fr>
R: Damien Hedde <damien.hedde@greensocs.com>
S: Maintained
F: include/hw/clock.h
F: include/hw/qdev-clock.h
@@ -3819,7 +3784,8 @@ W: https://cirrus-ci.com/github/qemu/qemu
Windows Hosted Continuous Integration
M: Yonggang Luo <luoyonggang@gmail.com>
S: Maintained
F: .gitlab-ci.d/windows.yml
F: .cirrus.yml
W: https://cirrus-ci.com/github/qemu/qemu
Guest Test Compilation Support
M: Alex Bennée <alex.bennee@linaro.org>

View File

@@ -1 +1 @@
7.2.92
7.2.50

View File

@@ -27,7 +27,7 @@
#include "qemu/accel.h"
#include "hw/boards.h"
#include "sysemu/cpus.h"
#include "qemu/error-report.h"
#include "accel-softmmu.h"
int accel_init_machine(AccelState *accel, MachineState *ms)

View File

@@ -86,13 +86,6 @@ static bool kvm_cpus_are_resettable(void)
return !kvm_enabled() || kvm_cpu_check_are_resettable();
}
#ifdef KVM_CAP_SET_GUEST_DEBUG
static int kvm_update_guest_debug_ops(CPUState *cpu)
{
return kvm_update_guest_debug(cpu, 0);
}
#endif
static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
{
AccelOpsClass *ops = ACCEL_OPS_CLASS(oc);
@@ -106,7 +99,6 @@ static void kvm_accel_ops_class_init(ObjectClass *oc, void *data)
ops->synchronize_pre_loadvm = kvm_cpu_synchronize_pre_loadvm;
#ifdef KVM_CAP_SET_GUEST_DEBUG
ops->update_guest_debug = kvm_update_guest_debug_ops;
ops->supports_guest_debug = kvm_supports_guest_debug;
ops->insert_breakpoint = kvm_insert_breakpoint;
ops->remove_breakpoint = kvm_remove_breakpoint;

View File

@@ -11,7 +11,6 @@
*/
#include "qemu/osdep.h"
#include "exec/tb-flush.h"
#include "exec/exec-all.h"
void tb_flush(CPUState *cpu)

View File

@@ -21,7 +21,6 @@
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
#include "exec/exec-all.h"
#include "qemu/plugin.h"
bool tcg_allowed;
@@ -66,8 +65,6 @@ void cpu_loop_exit(CPUState *cpu)
{
/* Undo the setting in cpu_tb_exec. */
cpu->can_do_io = 1;
/* Undo any setting in generated code. */
qemu_plugin_disable_mem_helpers(cpu);
siglongjmp(cpu->jmp_env, 1);
}

View File

@@ -459,7 +459,6 @@ cpu_tb_exec(CPUState *cpu, TranslationBlock *itb, int *tb_exit)
qemu_thread_jit_execute();
ret = tcg_qemu_tb_exec(env, tb_ptr);
cpu->can_do_io = 1;
qemu_plugin_disable_mem_helpers(cpu);
/*
* TODO: Delay swapping back to the read-write region of the TB
* until we actually need to modify the TB. The read-only copy,
@@ -527,6 +526,7 @@ static void cpu_exec_exit(CPUState *cpu)
if (cc->tcg_ops->cpu_exec_exit) {
cc->tcg_ops->cpu_exec_exit(cpu);
}
QEMU_PLUGIN_ASSERT(cpu->plugin_mem_cbs == NULL);
}
void cpu_exec_step_atomic(CPUState *cpu)
@@ -580,6 +580,7 @@ void cpu_exec_step_atomic(CPUState *cpu)
qemu_mutex_unlock_iothread();
}
assert_no_pages_locked();
qemu_plugin_disable_mem_helpers(cpu);
}
/*
@@ -1003,6 +1004,7 @@ cpu_exec_loop(CPUState *cpu, SyncClocks *sc)
cpu_loop_exec_tb(cpu, tb, pc, &last_tb, &tb_exit);
QEMU_PLUGIN_ASSERT(cpu->plugin_mem_cbs == NULL);
/* Try to align the host and virtual clocks
if the guest is in advance */
align_clocks(sc, cpu);
@@ -1027,6 +1029,7 @@ static int cpu_exec_setjmp(CPUState *cpu, SyncClocks *sc)
if (qemu_mutex_iothread_locked()) {
qemu_mutex_unlock_iothread();
}
qemu_plugin_disable_mem_helpers(cpu);
assert_no_pages_locked();
}

View File

@@ -44,7 +44,6 @@
*/
#include "qemu/osdep.h"
#include "tcg/tcg.h"
#include "tcg/tcg-temp-internal.h"
#include "tcg/tcg-op.h"
#include "exec/exec-all.h"
#include "exec/plugin-gen.h"

View File

@@ -22,7 +22,6 @@
#include "exec/cputlb.h"
#include "exec/log.h"
#include "exec/exec-all.h"
#include "exec/tb-flush.h"
#include "exec/translate-all.h"
#include "sysemu/tcg.h"
#include "tcg/tcg.h"

View File

@@ -47,7 +47,6 @@
#include "exec/cputlb.h"
#include "exec/translate-all.h"
#include "exec/translator.h"
#include "exec/tb-flush.h"
#include "qemu/bitmap.h"
#include "qemu/qemu-print.h"
#include "qemu/main-loop.h"

View File

@@ -12,7 +12,6 @@
#include "qemu/error-report.h"
#include "qemu/module.h"
#include "qapi/error.h"
#include "hw/xen/xen_native.h"
#include "hw/xen/xen-legacy-backend.h"
#include "hw/xen/xen_pt.h"
#include "chardev/char.h"
@@ -30,12 +29,83 @@ xc_interface *xen_xc;
xenforeignmemory_handle *xen_fmem;
xendevicemodel_handle *xen_dmod;
static void xenstore_record_dm_state(const char *state)
static int store_dev_info(int domid, Chardev *cs, const char *string)
{
struct xs_handle *xs = NULL;
char *path = NULL;
char *newpath = NULL;
char *pts = NULL;
int ret = -1;
/* Only continue if we're talking to a pty. */
if (!CHARDEV_IS_PTY(cs)) {
return 0;
}
pts = cs->filename + 4;
/* We now have everything we need to set the xenstore entry. */
xs = xs_open(0);
if (xs == NULL) {
fprintf(stderr, "Could not contact XenStore\n");
goto out;
}
path = xs_get_domain_path(xs, domid);
if (path == NULL) {
fprintf(stderr, "xs_get_domain_path() error\n");
goto out;
}
newpath = realloc(path, (strlen(path) + strlen(string) +
strlen("/tty") + 1));
if (newpath == NULL) {
fprintf(stderr, "realloc error\n");
goto out;
}
path = newpath;
strcat(path, string);
strcat(path, "/tty");
if (!xs_write(xs, XBT_NULL, path, pts, strlen(pts))) {
fprintf(stderr, "xs_write for '%s' fail", string);
goto out;
}
ret = 0;
out:
free(path);
xs_close(xs);
return ret;
}
void xenstore_store_pv_console_info(int i, Chardev *chr)
{
if (i == 0) {
store_dev_info(xen_domid, chr, "/console");
} else {
char buf[32];
snprintf(buf, sizeof(buf), "/device/console/%d", i);
store_dev_info(xen_domid, chr, buf);
}
}
static void xenstore_record_dm_state(struct xs_handle *xs, const char *state)
{
char path[50];
if (xs == NULL) {
error_report("xenstore connection not initialized");
exit(1);
}
snprintf(path, sizeof (path), "device-model/%u/state", xen_domid);
if (!qemu_xen_xs_write(xenstore, XBT_NULL, path, state, strlen(state))) {
/*
* This call may fail when running restricted so don't make it fatal in
* that case. Toolstacks should instead use QMP to listen for state changes.
*/
if (!xs_write(xs, XBT_NULL, path, state, strlen(state)) &&
!xen_domid_restrict) {
error_report("error recording dm state");
exit(1);
}
@@ -47,7 +117,7 @@ static void xen_change_state_handler(void *opaque, bool running,
{
if (running) {
/* record state running */
xenstore_record_dm_state("running");
xenstore_record_dm_state(xenstore, "running");
}
}
@@ -96,15 +166,7 @@ static int xen_init(MachineState *ms)
xc_interface_close(xen_xc);
return -1;
}
/*
* The XenStore write would fail when running restricted so don't attempt
* it in that case. Toolstacks should instead use QMP to listen for state
* changes.
*/
if (!xen_domid_restrict) {
qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
}
qemu_add_vm_change_state_handler(xen_change_state_handler, NULL);
/*
* opt out of system RAM being allocated by generic code
*/

View File

@@ -222,7 +222,11 @@ static int alsa_poll_helper (snd_pcm_t *handle, struct pollhlp *hlp, int mask)
return -1;
}
pfds = g_new0(struct pollfd, count);
pfds = audio_calloc ("alsa_poll_helper", count, sizeof (*pfds));
if (!pfds) {
dolog ("Could not initialize poll mode\n");
return -1;
}
err = snd_pcm_poll_descriptors (handle, pfds, count);
if (err < 0) {
@@ -913,23 +917,28 @@ static void *alsa_audio_init(Audiodev *dev)
alsa_init_per_direction(aopts->in);
alsa_init_per_direction(aopts->out);
/* don't set has_* so alsa_open can identify it wasn't set by the user */
/*
* need to define them, as otherwise alsa produces no sound
* doesn't set has_* so alsa_open can identify it wasn't set by the user
*/
if (!dev->u.alsa.out->has_period_length) {
/* 256 frames assuming 44100Hz */
dev->u.alsa.out->period_length = 5805;
/* 1024 frames assuming 44100Hz */
dev->u.alsa.out->period_length = 1024 * 1000000 / 44100;
}
if (!dev->u.alsa.out->has_buffer_length) {
/* 4096 frames assuming 44100Hz */
dev->u.alsa.out->buffer_length = 92880;
dev->u.alsa.out->buffer_length = 4096ll * 1000000 / 44100;
}
/*
* OptsVisitor sets unspecified optional fields to zero, but do not depend
* on it...
*/
if (!dev->u.alsa.in->has_period_length) {
/* 256 frames assuming 44100Hz */
dev->u.alsa.in->period_length = 5805;
dev->u.alsa.in->period_length = 0;
}
if (!dev->u.alsa.in->has_buffer_length) {
/* 4096 frames assuming 44100Hz */
dev->u.alsa.in->buffer_length = 92880;
dev->u.alsa.in->buffer_length = 0;
}
return dev;

View File

@@ -33,7 +33,6 @@
#include "qapi/qapi-visit-audio.h"
#include "qapi/qapi-commands-audio.h"
#include "qemu/cutils.h"
#include "qemu/log.h"
#include "qemu/module.h"
#include "qemu/help_option.h"
#include "sysemu/sysemu.h"
@@ -149,6 +148,26 @@ static inline int audio_bits_to_index (int bits)
}
}
void *audio_calloc (const char *funcname, int nmemb, size_t size)
{
int cond;
size_t len;
len = nmemb * size;
cond = !nmemb || !size;
cond |= nmemb < 0;
cond |= len < size;
if (audio_bug ("audio_calloc", cond)) {
AUD_log (NULL, "%s passed invalid arguments to audio_calloc\n",
funcname);
AUD_log (NULL, "nmemb=%d size=%zu (len=%zu)\n", nmemb, size, len);
return NULL;
}
return g_malloc0 (len);
}
void AUD_vlog (const char *cap, const char *fmt, va_list ap)
{
if (cap) {
@@ -381,6 +400,13 @@ void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len)
/*
* Capture
*/
static void noop_conv (struct st_sample *dst, const void *src, int samples)
{
(void) src;
(void) dst;
(void) samples;
}
static CaptureVoiceOut *audio_pcm_capture_find_specific(AudioState *s,
struct audsettings *as)
{
@@ -478,8 +504,15 @@ static int audio_attach_capture (HWVoiceOut *hw)
sw->info = hw->info;
sw->empty = 1;
sw->active = hw->enabled;
sw->conv = noop_conv;
sw->ratio = ((int64_t) hw_cap->info.freq << 32) / sw->info.freq;
sw->vol = nominal_volume;
sw->rate = st_rate_start (sw->info.freq, hw_cap->info.freq);
if (!sw->rate) {
dolog ("Could not start rate conversion for `%s'\n", SW_NAME (sw));
g_free (sw);
return -1;
}
QLIST_INSERT_HEAD (&hw_cap->sw_head, sw, entries);
QLIST_INSERT_HEAD (&hw->cap_head, sc, entries);
#ifdef DEBUG_CAPTURE
@@ -514,8 +547,8 @@ static size_t audio_pcm_hw_find_min_in (HWVoiceIn *hw)
static size_t audio_pcm_hw_get_live_in(HWVoiceIn *hw)
{
size_t live = hw->total_samples_captured - audio_pcm_hw_find_min_in (hw);
if (audio_bug(__func__, live > hw->conv_buf.size)) {
dolog("live=%zu hw->conv_buf.size=%zu\n", live, hw->conv_buf.size);
if (audio_bug(__func__, live > hw->conv_buf->size)) {
dolog("live=%zu hw->conv_buf->size=%zu\n", live, hw->conv_buf->size);
return 0;
}
return live;
@@ -524,13 +557,13 @@ static size_t audio_pcm_hw_get_live_in(HWVoiceIn *hw)
static size_t audio_pcm_hw_conv_in(HWVoiceIn *hw, void *pcm_buf, size_t samples)
{
size_t conv = 0;
STSampleBuffer *conv_buf = &hw->conv_buf;
STSampleBuffer *conv_buf = hw->conv_buf;
while (samples) {
uint8_t *src = advance(pcm_buf, conv * hw->info.bytes_per_frame);
size_t proc = MIN(samples, conv_buf->size - conv_buf->pos);
hw->conv(conv_buf->buffer + conv_buf->pos, src, proc);
hw->conv(conv_buf->samples + conv_buf->pos, src, proc);
conv_buf->pos = (conv_buf->pos + proc) % conv_buf->size;
samples -= proc;
conv += proc;
@@ -542,65 +575,56 @@ static size_t audio_pcm_hw_conv_in(HWVoiceIn *hw, void *pcm_buf, size_t samples)
/*
* Soft voice (capture)
*/
static void audio_pcm_sw_resample_in(SWVoiceIn *sw,
size_t frames_in_max, size_t frames_out_max,
size_t *total_in, size_t *total_out)
static size_t audio_pcm_sw_read(SWVoiceIn *sw, void *buf, size_t size)
{
HWVoiceIn *hw = sw->hw;
struct st_sample *src, *dst;
size_t live, rpos, frames_in, frames_out;
live = hw->total_samples_captured - sw->total_hw_samples_acquired;
rpos = audio_ring_posb(hw->conv_buf.pos, live, hw->conv_buf.size);
/* resample conv_buf from rpos to end of buffer */
src = hw->conv_buf.buffer + rpos;
frames_in = MIN(frames_in_max, hw->conv_buf.size - rpos);
dst = sw->resample_buf.buffer;
frames_out = frames_out_max;
st_rate_flow(sw->rate, src, dst, &frames_in, &frames_out);
rpos += frames_in;
*total_in = frames_in;
*total_out = frames_out;
/* resample conv_buf from start of buffer if there are input frames left */
if (frames_in_max - frames_in && rpos == hw->conv_buf.size) {
src = hw->conv_buf.buffer;
frames_in = frames_in_max - frames_in;
dst += frames_out;
frames_out = frames_out_max - frames_out;
st_rate_flow(sw->rate, src, dst, &frames_in, &frames_out);
*total_in += frames_in;
*total_out += frames_out;
}
}
static size_t audio_pcm_sw_read(SWVoiceIn *sw, void *buf, size_t buf_len)
{
HWVoiceIn *hw = sw->hw;
size_t live, frames_out_max, total_in, total_out;
size_t samples, live, ret = 0, swlim, isamp, osamp, rpos, total = 0;
struct st_sample *src, *dst = sw->buf;
live = hw->total_samples_captured - sw->total_hw_samples_acquired;
if (!live) {
return 0;
}
if (audio_bug(__func__, live > hw->conv_buf.size)) {
dolog("live_in=%zu hw->conv_buf.size=%zu\n", live, hw->conv_buf.size);
if (audio_bug(__func__, live > hw->conv_buf->size)) {
dolog("live_in=%zu hw->conv_buf->size=%zu\n", live, hw->conv_buf->size);
return 0;
}
frames_out_max = MIN(buf_len / sw->info.bytes_per_frame,
sw->resample_buf.size);
rpos = audio_ring_posb(hw->conv_buf->pos, live, hw->conv_buf->size);
audio_pcm_sw_resample_in(sw, live, frames_out_max, &total_in, &total_out);
samples = size / sw->info.bytes_per_frame;
swlim = (live * sw->ratio) >> 32;
swlim = MIN (swlim, samples);
while (swlim) {
src = hw->conv_buf->samples + rpos;
if (hw->conv_buf->pos > rpos) {
isamp = hw->conv_buf->pos - rpos;
} else {
isamp = hw->conv_buf->size - rpos;
}
if (!isamp) {
break;
}
osamp = swlim;
st_rate_flow (sw->rate, src, dst, &isamp, &osamp);
swlim -= osamp;
rpos = (rpos + isamp) % hw->conv_buf->size;
dst += osamp;
ret += osamp;
total += isamp;
}
if (!hw->pcm_ops->volume_in) {
mixeng_volume(sw->resample_buf.buffer, total_out, &sw->vol);
mixeng_volume (sw->buf, ret, &sw->vol);
}
sw->clip(buf, sw->resample_buf.buffer, total_out);
sw->total_hw_samples_acquired += total_in;
return total_out * sw->info.bytes_per_frame;
sw->clip (buf, sw->buf, ret);
sw->total_hw_samples_acquired += total;
return ret * sw->info.bytes_per_frame;
}
/*
@@ -636,8 +660,8 @@ static size_t audio_pcm_hw_get_live_out (HWVoiceOut *hw, int *nb_live)
if (nb_live1) {
size_t live = smin;
if (audio_bug(__func__, live > hw->mix_buf.size)) {
dolog("live=%zu hw->mix_buf.size=%zu\n", live, hw->mix_buf.size);
if (audio_bug(__func__, live > hw->mix_buf->size)) {
dolog("live=%zu hw->mix_buf->size=%zu\n", live, hw->mix_buf->size);
return 0;
}
return live;
@@ -654,17 +678,17 @@ static size_t audio_pcm_hw_get_free(HWVoiceOut *hw)
static void audio_pcm_hw_clip_out(HWVoiceOut *hw, void *pcm_buf, size_t len)
{
size_t clipped = 0;
size_t pos = hw->mix_buf.pos;
size_t pos = hw->mix_buf->pos;
while (len) {
st_sample *src = hw->mix_buf.buffer + pos;
st_sample *src = hw->mix_buf->samples + pos;
uint8_t *dst = advance(pcm_buf, clipped * hw->info.bytes_per_frame);
size_t samples_till_end_of_buf = hw->mix_buf.size - pos;
size_t samples_till_end_of_buf = hw->mix_buf->size - pos;
size_t samples_to_clip = MIN(len, samples_till_end_of_buf);
hw->clip(dst, src, samples_to_clip);
pos = (pos + samples_to_clip) % hw->mix_buf.size;
pos = (pos + samples_to_clip) % hw->mix_buf->size;
len -= samples_to_clip;
clipped += samples_to_clip;
}
@@ -673,113 +697,84 @@ static void audio_pcm_hw_clip_out(HWVoiceOut *hw, void *pcm_buf, size_t len)
/*
* Soft voice (playback)
*/
static void audio_pcm_sw_resample_out(SWVoiceOut *sw,
size_t frames_in_max, size_t frames_out_max,
size_t *total_in, size_t *total_out)
static size_t audio_pcm_sw_write(SWVoiceOut *sw, void *buf, size_t size)
{
HWVoiceOut *hw = sw->hw;
struct st_sample *src, *dst;
size_t live, wpos, frames_in, frames_out;
size_t hwsamples, samples, isamp, osamp, wpos, live, dead, left, blck;
size_t hw_free;
size_t ret = 0, pos = 0, total = 0;
live = sw->total_hw_samples_mixed;
wpos = (hw->mix_buf.pos + live) % hw->mix_buf.size;
/* write to mix_buf from wpos to end of buffer */
src = sw->resample_buf.buffer;
frames_in = frames_in_max;
dst = hw->mix_buf.buffer + wpos;
frames_out = MIN(frames_out_max, hw->mix_buf.size - wpos);
st_rate_flow_mix(sw->rate, src, dst, &frames_in, &frames_out);
wpos += frames_out;
*total_in = frames_in;
*total_out = frames_out;
/* write to mix_buf from start of buffer if there are input frames left */
if (frames_in_max - frames_in > 0 && wpos == hw->mix_buf.size) {
src += frames_in;
frames_in = frames_in_max - frames_in;
dst = hw->mix_buf.buffer;
frames_out = frames_out_max - frames_out;
st_rate_flow_mix(sw->rate, src, dst, &frames_in, &frames_out);
*total_in += frames_in;
*total_out += frames_out;
if (!sw) {
return size;
}
}
static size_t audio_pcm_sw_write(SWVoiceOut *sw, void *buf, size_t buf_len)
{
HWVoiceOut *hw = sw->hw;
size_t live, dead, hw_free, sw_max, fe_max;
size_t frames_in_max, frames_out_max, total_in, total_out;
hwsamples = sw->hw->mix_buf->size;
live = sw->total_hw_samples_mixed;
if (audio_bug(__func__, live > hw->mix_buf.size)) {
dolog("live=%zu hw->mix_buf.size=%zu\n", live, hw->mix_buf.size);
if (audio_bug(__func__, live > hwsamples)) {
dolog("live=%zu hw->mix_buf->size=%zu\n", live, hwsamples);
return 0;
}
if (live == hw->mix_buf.size) {
if (live == hwsamples) {
#ifdef DEBUG_OUT
dolog ("%s is full %zu\n", sw->name, live);
#endif
return 0;
}
dead = hw->mix_buf.size - live;
hw_free = audio_pcm_hw_get_free(hw);
wpos = (sw->hw->mix_buf->pos + live) % hwsamples;
dead = hwsamples - live;
hw_free = audio_pcm_hw_get_free(sw->hw);
hw_free = hw_free > live ? hw_free - live : 0;
frames_out_max = MIN(dead, hw_free);
sw_max = st_rate_frames_in(sw->rate, frames_out_max);
fe_max = MIN(buf_len / sw->info.bytes_per_frame + sw->resample_buf.pos,
sw->resample_buf.size);
frames_in_max = MIN(sw_max, fe_max);
samples = ((int64_t)MIN(dead, hw_free) << 32) / sw->ratio;
samples = MIN(samples, size / sw->info.bytes_per_frame);
if (samples) {
sw->conv(sw->buf, buf, samples);
if (!frames_in_max) {
return 0;
}
if (frames_in_max > sw->resample_buf.pos) {
sw->conv(sw->resample_buf.buffer + sw->resample_buf.pos,
buf, frames_in_max - sw->resample_buf.pos);
if (!sw->hw->pcm_ops->volume_out) {
mixeng_volume(sw->resample_buf.buffer + sw->resample_buf.pos,
frames_in_max - sw->resample_buf.pos, &sw->vol);
mixeng_volume(sw->buf, samples, &sw->vol);
}
}
audio_pcm_sw_resample_out(sw, frames_in_max, frames_out_max,
&total_in, &total_out);
sw->total_hw_samples_mixed += total_out;
sw->empty = sw->total_hw_samples_mixed == 0;
/*
* Upsampling may leave one audio frame in the resample buffer. Decrement
* total_in by one if there was a leftover frame from the previous resample
* pass in the resample buffer. Increment total_in by one if the current
* resample pass left one frame in the resample buffer.
*/
if (frames_in_max - total_in == 1) {
/* copy one leftover audio frame to the beginning of the buffer */
*sw->resample_buf.buffer = *(sw->resample_buf.buffer + total_in);
total_in += 1 - sw->resample_buf.pos;
sw->resample_buf.pos = 1;
} else if (total_in >= sw->resample_buf.pos) {
total_in -= sw->resample_buf.pos;
sw->resample_buf.pos = 0;
while (samples) {
dead = hwsamples - live;
left = hwsamples - wpos;
blck = MIN (dead, left);
if (!blck) {
break;
}
isamp = samples;
osamp = blck;
st_rate_flow_mix (
sw->rate,
sw->buf + pos,
sw->hw->mix_buf->samples + wpos,
&isamp,
&osamp
);
ret += isamp;
samples -= isamp;
pos += isamp;
live += osamp;
wpos = (wpos + osamp) % hwsamples;
total += osamp;
}
sw->total_hw_samples_mixed += total;
sw->empty = sw->total_hw_samples_mixed == 0;
#ifdef DEBUG_OUT
dolog (
"%s: write size %zu written %zu total mixed %zu\n",
SW_NAME(sw),
buf_len / sw->info.bytes_per_frame,
total_in,
"%s: write size %zu ret %zu total sw %zu\n",
SW_NAME (sw),
size / sw->info.bytes_per_frame,
ret,
sw->total_hw_samples_mixed
);
#endif
return total_in * sw->info.bytes_per_frame;
return ret * sw->info.bytes_per_frame;
}
#ifdef DEBUG_AUDIO
@@ -997,6 +992,18 @@ void AUD_set_active_in (SWVoiceIn *sw, int on)
}
}
/**
* audio_frontend_frames_in() - returns the number of frames the resampling
* code generates from frames_in frames
*
* @sw: audio recording frontend
* @frames_in: number of frames
*/
static size_t audio_frontend_frames_in(SWVoiceIn *sw, size_t frames_in)
{
return (int64_t)frames_in * sw->ratio >> 32;
}
static size_t audio_get_avail (SWVoiceIn *sw)
{
size_t live;
@@ -1006,21 +1013,33 @@ static size_t audio_get_avail (SWVoiceIn *sw)
}
live = sw->hw->total_samples_captured - sw->total_hw_samples_acquired;
if (audio_bug(__func__, live > sw->hw->conv_buf.size)) {
dolog("live=%zu sw->hw->conv_buf.size=%zu\n", live,
sw->hw->conv_buf.size);
if (audio_bug(__func__, live > sw->hw->conv_buf->size)) {
dolog("live=%zu sw->hw->conv_buf->size=%zu\n", live,
sw->hw->conv_buf->size);
return 0;
}
ldebug (
"%s: get_avail live %zu frontend frames %u\n",
"%s: get_avail live %zu frontend frames %zu\n",
SW_NAME (sw),
live, st_rate_frames_out(sw->rate, live)
live, audio_frontend_frames_in(sw, live)
);
return live;
}
/**
* audio_frontend_frames_out() - returns the number of frames needed to
* get frames_out frames after resampling
*
* @sw: audio playback frontend
* @frames_out: number of frames
*/
static size_t audio_frontend_frames_out(SWVoiceOut *sw, size_t frames_out)
{
return ((int64_t)frames_out << 32) / sw->ratio;
}
static size_t audio_get_free(SWVoiceOut *sw)
{
size_t live, dead;
@@ -1031,17 +1050,17 @@ static size_t audio_get_free(SWVoiceOut *sw)
live = sw->total_hw_samples_mixed;
if (audio_bug(__func__, live > sw->hw->mix_buf.size)) {
dolog("live=%zu sw->hw->mix_buf.size=%zu\n", live,
sw->hw->mix_buf.size);
if (audio_bug(__func__, live > sw->hw->mix_buf->size)) {
dolog("live=%zu sw->hw->mix_buf->size=%zu\n", live,
sw->hw->mix_buf->size);
return 0;
}
dead = sw->hw->mix_buf.size - live;
dead = sw->hw->mix_buf->size - live;
#ifdef DEBUG_OUT
dolog("%s: get_free live %zu dead %zu frontend frames %u\n",
SW_NAME(sw), live, dead, st_rate_frames_in(sw->rate, dead));
dolog("%s: get_free live %zu dead %zu frontend frames %zu\n",
SW_NAME(sw), live, dead, audio_frontend_frames_out(sw, dead));
#endif
return dead;
@@ -1057,40 +1076,32 @@ static void audio_capture_mix_and_clear(HWVoiceOut *hw, size_t rpos,
for (sc = hw->cap_head.lh_first; sc; sc = sc->entries.le_next) {
SWVoiceOut *sw = &sc->sw;
size_t rpos2 = rpos;
int rpos2 = rpos;
n = samples;
while (n) {
size_t till_end_of_hw = hw->mix_buf.size - rpos2;
size_t to_read = MIN(till_end_of_hw, n);
size_t live, frames_in, frames_out;
size_t till_end_of_hw = hw->mix_buf->size - rpos2;
size_t to_write = MIN(till_end_of_hw, n);
size_t bytes = to_write * hw->info.bytes_per_frame;
size_t written;
sw->resample_buf.buffer = hw->mix_buf.buffer + rpos2;
sw->resample_buf.size = to_read;
live = sw->total_hw_samples_mixed;
audio_pcm_sw_resample_out(sw,
to_read, sw->hw->mix_buf.size - live,
&frames_in, &frames_out);
sw->total_hw_samples_mixed += frames_out;
sw->empty = sw->total_hw_samples_mixed == 0;
if (to_read - frames_in) {
dolog("Could not mix %zu frames into a capture "
sw->buf = hw->mix_buf->samples + rpos2;
written = audio_pcm_sw_write (sw, NULL, bytes);
if (written - bytes) {
dolog("Could not mix %zu bytes into a capture "
"buffer, mixed %zu\n",
to_read, frames_in);
bytes, written);
break;
}
n -= to_read;
rpos2 = (rpos2 + to_read) % hw->mix_buf.size;
n -= to_write;
rpos2 = (rpos2 + to_write) % hw->mix_buf->size;
}
}
}
n = MIN(samples, hw->mix_buf.size - rpos);
mixeng_clear(hw->mix_buf.buffer + rpos, n);
mixeng_clear(hw->mix_buf.buffer, samples - n);
n = MIN(samples, hw->mix_buf->size - rpos);
mixeng_clear(hw->mix_buf->samples + rpos, n);
mixeng_clear(hw->mix_buf->samples, samples - n);
}
static size_t audio_pcm_hw_run_out(HWVoiceOut *hw, size_t live)
@@ -1116,7 +1127,7 @@ static size_t audio_pcm_hw_run_out(HWVoiceOut *hw, size_t live)
live -= proc;
clipped += proc;
hw->mix_buf.pos = (hw->mix_buf.pos + proc) % hw->mix_buf.size;
hw->mix_buf->pos = (hw->mix_buf->pos + proc) % hw->mix_buf->size;
if (proc == 0 || proc < decr) {
break;
@@ -1170,14 +1181,12 @@ static void audio_run_out (AudioState *s)
size_t free;
if (hw_free > sw->total_hw_samples_mixed) {
free = st_rate_frames_in(sw->rate,
free = audio_frontend_frames_out(sw,
MIN(sw_free, hw_free - sw->total_hw_samples_mixed));
} else {
free = 0;
}
if (free > sw->resample_buf.pos) {
free = MIN(free, sw->resample_buf.size)
- sw->resample_buf.pos;
if (free > 0) {
sw->callback.fn(sw->callback.opaque,
free * sw->info.bytes_per_frame);
}
@@ -1189,8 +1198,8 @@ static void audio_run_out (AudioState *s)
live = 0;
}
if (audio_bug(__func__, live > hw->mix_buf.size)) {
dolog("live=%zu hw->mix_buf.size=%zu\n", live, hw->mix_buf.size);
if (audio_bug(__func__, live > hw->mix_buf->size)) {
dolog("live=%zu hw->mix_buf->size=%zu\n", live, hw->mix_buf->size);
continue;
}
@@ -1218,13 +1227,13 @@ static void audio_run_out (AudioState *s)
continue;
}
prev_rpos = hw->mix_buf.pos;
prev_rpos = hw->mix_buf->pos;
played = audio_pcm_hw_run_out(hw, live);
replay_audio_out(&played);
if (audio_bug(__func__, hw->mix_buf.pos >= hw->mix_buf.size)) {
dolog("hw->mix_buf.pos=%zu hw->mix_buf.size=%zu played=%zu\n",
hw->mix_buf.pos, hw->mix_buf.size, played);
hw->mix_buf.pos = 0;
if (audio_bug(__func__, hw->mix_buf->pos >= hw->mix_buf->size)) {
dolog("hw->mix_buf->pos=%zu hw->mix_buf->size=%zu played=%zu\n",
hw->mix_buf->pos, hw->mix_buf->size, played);
hw->mix_buf->pos = 0;
}
#ifdef DEBUG_OUT
@@ -1305,10 +1314,10 @@ static void audio_run_in (AudioState *s)
if (replay_mode != REPLAY_MODE_PLAY) {
captured = audio_pcm_hw_run_in(
hw, hw->conv_buf.size - audio_pcm_hw_get_live_in(hw));
hw, hw->conv_buf->size - audio_pcm_hw_get_live_in(hw));
}
replay_audio_in(&captured, hw->conv_buf.buffer, &hw->conv_buf.pos,
hw->conv_buf.size);
replay_audio_in(&captured, hw->conv_buf->samples, &hw->conv_buf->pos,
hw->conv_buf->size);
min = audio_pcm_hw_find_min_in (hw);
hw->total_samples_captured += captured - min;
@@ -1321,9 +1330,8 @@ static void audio_run_in (AudioState *s)
size_t sw_avail = audio_get_avail(sw);
size_t avail;
avail = st_rate_frames_out(sw->rate, sw_avail);
avail = audio_frontend_frames_in(sw, sw_avail);
if (avail > 0) {
avail = MIN(avail, sw->resample_buf.size);
sw->callback.fn(sw->callback.opaque,
avail * sw->info.bytes_per_frame);
}
@@ -1342,14 +1350,14 @@ static void audio_run_capture (AudioState *s)
SWVoiceOut *sw;
captured = live = audio_pcm_hw_get_live_out (hw, NULL);
rpos = hw->mix_buf.pos;
rpos = hw->mix_buf->pos;
while (live) {
size_t left = hw->mix_buf.size - rpos;
size_t left = hw->mix_buf->size - rpos;
size_t to_capture = MIN(live, left);
struct st_sample *src;
struct capture_callback *cb;
src = hw->mix_buf.buffer + rpos;
src = hw->mix_buf->samples + rpos;
hw->clip (cap->buf, src, to_capture);
mixeng_clear (src, to_capture);
@@ -1357,10 +1365,10 @@ static void audio_run_capture (AudioState *s)
cb->ops.capture (cb->opaque, cap->buf,
to_capture * hw->info.bytes_per_frame);
}
rpos = (rpos + to_capture) % hw->mix_buf.size;
rpos = (rpos + to_capture) % hw->mix_buf->size;
live -= to_capture;
}
hw->mix_buf.pos = rpos;
hw->mix_buf->pos = rpos;
for (sw = hw->sw_head.lh_first; sw; sw = sw->entries.le_next) {
if (!sw->active && sw->empty) {
@@ -1919,7 +1927,7 @@ CaptureVoiceOut *AUD_add_capture(
audio_pcm_init_info (&hw->info, as);
cap->buf = g_malloc0_n(hw->mix_buf.size, hw->info.bytes_per_frame);
cap->buf = g_malloc0_n(hw->mix_buf->size, hw->info.bytes_per_frame);
if (hw->info.is_float) {
hw->clip = mixeng_clip_float[hw->info.nchannels == 2];
@@ -1971,7 +1979,7 @@ void AUD_del_capture (CaptureVoiceOut *cap, void *cb_opaque)
sw = sw1;
}
QLIST_REMOVE (cap, entries);
g_free(cap->hw.mix_buf.buffer);
g_free (cap->hw.mix_buf);
g_free (cap->buf);
g_free (cap);
}

View File

@@ -58,7 +58,7 @@ typedef struct SWVoiceCap SWVoiceCap;
typedef struct STSampleBuffer {
size_t pos, size;
st_sample *buffer;
st_sample samples[];
} STSampleBuffer;
typedef struct HWVoiceOut {
@@ -71,7 +71,7 @@ typedef struct HWVoiceOut {
f_sample *clip;
uint64_t ts_helper;
STSampleBuffer mix_buf;
STSampleBuffer *mix_buf;
void *buf_emul;
size_t pos_emul, pending_emul, size_emul;
@@ -93,7 +93,7 @@ typedef struct HWVoiceIn {
size_t total_samples_captured;
uint64_t ts_helper;
STSampleBuffer conv_buf;
STSampleBuffer *conv_buf;
void *buf_emul;
size_t pos_emul, pending_emul, size_emul;
@@ -108,7 +108,8 @@ struct SWVoiceOut {
AudioState *s;
struct audio_pcm_info info;
t_sample *conv;
STSampleBuffer resample_buf;
int64_t ratio;
struct st_sample *buf;
void *rate;
size_t total_hw_samples_mixed;
int active;
@@ -125,9 +126,10 @@ struct SWVoiceIn {
AudioState *s;
int active;
struct audio_pcm_info info;
int64_t ratio;
void *rate;
size_t total_hw_samples_acquired;
STSampleBuffer resample_buf;
struct st_sample *buf;
f_sample *clip;
HWVoiceIn *hw;
char *name;
@@ -143,14 +145,14 @@ struct audio_driver {
void *(*init) (Audiodev *);
void (*fini) (void *);
#ifdef CONFIG_GIO
void (*set_dbus_server) (AudioState *s, GDBusObjectManagerServer *manager, bool p2p);
void (*set_dbus_server) (AudioState *s, GDBusObjectManagerServer *manager);
#endif
struct audio_pcm_ops *pcm_ops;
int can_be_default;
int max_voices_out;
int max_voices_in;
size_t voice_size_out;
size_t voice_size_in;
int voice_size_out;
int voice_size_in;
QLIST_ENTRY(audio_driver) next;
};
@@ -249,6 +251,7 @@ void audio_pcm_init_info (struct audio_pcm_info *info, struct audsettings *as);
void audio_pcm_info_clear_buf (struct audio_pcm_info *info, void *buf, int len);
int audio_bug (const char *funcname, int cond);
void *audio_calloc (const char *funcname, int nmemb, size_t size);
void audio_run(AudioState *s, const char *msg);
@@ -291,6 +294,9 @@ static inline size_t audio_ring_posb(size_t pos, size_t dist, size_t len)
#define ldebug(fmt, ...) (void)0
#endif
#define AUDIO_STRINGIFY_(n) #n
#define AUDIO_STRINGIFY(n) AUDIO_STRINGIFY_(n)
typedef struct AudiodevListEntry {
Audiodev *dev;
QSIMPLEQ_ENTRY(AudiodevListEntry) next;

View File

@@ -40,7 +40,7 @@ static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
struct audio_driver *drv)
{
int max_voices = glue (drv->max_voices_, TYPE);
size_t voice_size = glue(drv->voice_size_, TYPE);
int voice_size = glue (drv->voice_size_, TYPE);
if (glue (s->nb_hw_voices_, TYPE) > max_voices) {
if (!max_voices) {
@@ -63,17 +63,16 @@ static void glue(audio_init_nb_voices_, TYPE)(AudioState *s,
}
if (audio_bug(__func__, voice_size && !max_voices)) {
dolog("drv=`%s' voice_size=%zu max_voices=0\n",
drv->name, voice_size);
dolog ("drv=`%s' voice_size=%d max_voices=0\n",
drv->name, voice_size);
}
}
static void glue (audio_pcm_hw_free_resources_, TYPE) (HW *hw)
{
g_free(hw->buf_emul);
g_free(HWBUF.buffer);
HWBUF.buffer = NULL;
HWBUF.size = 0;
g_free (HWBUF);
HWBUF = NULL;
}
static void glue(audio_pcm_hw_alloc_resources_, TYPE)(HW *hw)
@@ -84,67 +83,56 @@ static void glue(audio_pcm_hw_alloc_resources_, TYPE)(HW *hw)
dolog("Attempted to allocate empty buffer\n");
}
HWBUF.buffer = g_new0(st_sample, samples);
HWBUF.size = samples;
HWBUF.pos = 0;
HWBUF = g_malloc0(sizeof(STSampleBuffer) + sizeof(st_sample) * samples);
HWBUF->size = samples;
} else {
HWBUF.buffer = NULL;
HWBUF.size = 0;
HWBUF = NULL;
}
}
static void glue (audio_pcm_sw_free_resources_, TYPE) (SW *sw)
{
g_free(sw->resample_buf.buffer);
sw->resample_buf.buffer = NULL;
sw->resample_buf.size = 0;
g_free (sw->buf);
if (sw->rate) {
st_rate_stop (sw->rate);
}
sw->buf = NULL;
sw->rate = NULL;
}
static int glue (audio_pcm_sw_alloc_resources_, TYPE) (SW *sw)
{
HW *hw = sw->hw;
uint64_t samples;
int samples;
if (!glue(audio_get_pdo_, TYPE)(sw->s->dev)->mixing_engine) {
return 0;
}
samples = muldiv64(HWBUF.size, sw->info.freq, hw->info.freq);
if (samples == 0) {
uint64_t f_fe_min;
uint64_t f_be = (uint32_t)hw->info.freq;
#ifdef DAC
samples = ((int64_t) sw->HWBUF->size << 32) / sw->ratio;
#else
samples = (int64_t)sw->HWBUF->size * sw->ratio >> 32;
#endif
/* f_fe_min = ceil(1 [frames] * f_be [Hz] / size_be [frames]) */
f_fe_min = (f_be + HWBUF.size - 1) / HWBUF.size;
qemu_log_mask(LOG_UNIMP,
AUDIO_CAP ": The guest selected a " NAME " sample rate"
" of %d Hz for %s. Only sample rates >= %" PRIu64 " Hz"
" are supported.\n",
sw->info.freq, sw->name, f_fe_min);
sw->buf = audio_calloc(__func__, samples, sizeof(struct st_sample));
if (!sw->buf) {
dolog ("Could not allocate buffer for `%s' (%d samples)\n",
SW_NAME (sw), samples);
return -1;
}
/*
* Allocate one additional audio frame that is needed for upsampling
* if the resample buffer size is small. For large buffer sizes take
* care of overflows and truncation.
*/
samples = samples < SIZE_MAX ? samples + 1 : SIZE_MAX;
sw->resample_buf.buffer = g_new0(st_sample, samples);
sw->resample_buf.size = samples;
sw->resample_buf.pos = 0;
#ifdef DAC
sw->rate = st_rate_start(sw->info.freq, hw->info.freq);
sw->rate = st_rate_start (sw->info.freq, sw->hw->info.freq);
#else
sw->rate = st_rate_start(hw->info.freq, sw->info.freq);
sw->rate = st_rate_start (sw->hw->info.freq, sw->info.freq);
#endif
if (!sw->rate) {
g_free (sw->buf);
sw->buf = NULL;
return -1;
}
return 0;
}
@@ -161,8 +149,11 @@ static int glue (audio_pcm_sw_init_, TYPE) (
sw->hw = hw;
sw->active = 0;
#ifdef DAC
sw->ratio = ((int64_t) sw->hw->info.freq << 32) / sw->info.freq;
sw->total_hw_samples_mixed = 0;
sw->empty = 1;
#else
sw->ratio = ((int64_t) sw->info.freq << 32) / sw->hw->info.freq;
#endif
if (sw->info.is_float) {
@@ -273,11 +264,13 @@ static HW *glue(audio_pcm_hw_add_new_, TYPE)(AudioState *s,
return NULL;
}
/*
* Since glue(s->nb_hw_voices_, TYPE) is != 0, glue(drv->voice_size_, TYPE)
* is guaranteed to be != 0. See the audio_init_nb_voices_* functions.
*/
hw = g_malloc0(glue(drv->voice_size_, TYPE));
hw = audio_calloc(__func__, 1, glue(drv->voice_size_, TYPE));
if (!hw) {
dolog ("Can not allocate voice `%s' size %d\n",
drv->name, glue (drv->voice_size_, TYPE));
return NULL;
}
hw->s = s;
hw->pcm_ops = drv->pcm_ops;
@@ -425,28 +418,33 @@ static SW *glue(audio_pcm_create_voice_pair_, TYPE)(
hw_as = *as;
}
sw = g_new0(SW, 1);
sw = audio_calloc(__func__, 1, sizeof(*sw));
if (!sw) {
dolog ("Could not allocate soft voice `%s' (%zu bytes)\n",
sw_name ? sw_name : "unknown", sizeof (*sw));
goto err1;
}
sw->s = s;
hw = glue(audio_pcm_hw_add_, TYPE)(s, &hw_as);
if (!hw) {
dolog("Could not create a backend for voice `%s'\n", sw_name);
goto err1;
goto err2;
}
glue (audio_pcm_hw_add_sw_, TYPE) (hw, sw);
if (glue (audio_pcm_sw_init_, TYPE) (sw, hw, sw_name, as)) {
goto err2;
goto err3;
}
return sw;
err2:
err3:
glue (audio_pcm_hw_del_sw_, TYPE) (sw);
glue (audio_pcm_hw_gc_, TYPE) (&hw);
err2:
g_free (sw);
err1:
g_free(sw);
return NULL;
}
@@ -517,8 +515,8 @@ SW *glue (AUD_open_, TYPE) (
HW *hw = sw->hw;
if (!hw) {
dolog("Internal logic error: voice `%s' has no backend\n",
SW_NAME(sw));
dolog ("Internal logic error voice `%s' has no hardware store\n",
SW_NAME (sw));
goto fail;
}
@@ -529,6 +527,7 @@ SW *glue (AUD_open_, TYPE) (
} else {
sw = glue(audio_pcm_create_voice_pair_, TYPE)(s, name, as);
if (!sw) {
dolog ("Failed to create voice `%s'\n", name);
return NULL;
}
}


@@ -43,7 +43,6 @@
typedef struct DBusAudio {
GDBusObjectManagerServer *server;
bool p2p;
GDBusObjectSkeleton *audio;
QemuDBusDisplay1Audio *iface;
GHashTable *out_listeners;
@@ -449,8 +448,7 @@ dbus_audio_register_listener(AudioState *s,
bool out)
{
DBusAudio *da = s->drv_opaque;
const char *sender =
da->p2p ? "p2p" : g_dbus_method_invocation_get_sender(invocation);
const char *sender = g_dbus_method_invocation_get_sender(invocation);
g_autoptr(GDBusConnection) listener_conn = NULL;
g_autoptr(GError) err = NULL;
g_autoptr(GSocket) socket = NULL;
@@ -593,7 +591,7 @@ dbus_audio_register_in_listener(AudioState *s,
}
static void
dbus_audio_set_server(AudioState *s, GDBusObjectManagerServer *server, bool p2p)
dbus_audio_set_server(AudioState *s, GDBusObjectManagerServer *server)
{
DBusAudio *da = s->drv_opaque;
@@ -601,7 +599,6 @@ dbus_audio_set_server(AudioState *s, GDBusObjectManagerServer *server, bool p2p)
g_assert(!da->server);
da->server = g_object_ref(server);
da->p2p = p2p;
da->audio = g_dbus_object_skeleton_new(DBUS_DISPLAY1_AUDIO_PATH);
da->iface = qemu_dbus_display1_audio_skeleton_new();


@@ -414,7 +414,12 @@ struct rate {
*/
void *st_rate_start (int inrate, int outrate)
{
struct rate *rate = g_new0(struct rate, 1);
struct rate *rate = audio_calloc(__func__, 1, sizeof(*rate));
if (!rate) {
dolog ("Could not allocate resampler (%zu bytes)\n", sizeof (*rate));
return NULL;
}
rate->opos = 0;
@@ -440,86 +445,6 @@ void st_rate_stop (void *opaque)
g_free (opaque);
}
/**
* st_rate_frames_out() - returns the number of frames the resampling code
* generates from frames_in frames
*
* @opaque: pointer to struct rate
* @frames_in: number of frames
*
* When upsampling, there may be more than one correct result. In this case,
* the function returns the maximum number of output frames the resampling
* code can generate.
*/
uint32_t st_rate_frames_out(void *opaque, uint32_t frames_in)
{
struct rate *rate = opaque;
uint64_t opos_end, opos_delta;
uint32_t ipos_end;
uint32_t frames_out;
if (rate->opos_inc == 1ULL << 32) {
return frames_in;
}
/* no output frame without at least one input frame */
if (!frames_in) {
return 0;
}
/* last frame read was at rate->ipos - 1 */
ipos_end = rate->ipos - 1 + frames_in;
opos_end = (uint64_t)ipos_end << 32;
/* last frame written was at rate->opos - rate->opos_inc */
if (opos_end + rate->opos_inc <= rate->opos) {
return 0;
}
opos_delta = opos_end - rate->opos + rate->opos_inc;
frames_out = opos_delta / rate->opos_inc;
return opos_delta % rate->opos_inc ? frames_out : frames_out - 1;
}
/**
* st_rate_frames_in() - returns the number of frames needed to
* get frames_out frames after resampling
*
* @opaque: pointer to struct rate
* @frames_out: number of frames
*
* When downsampling, there may be more than one correct result. In this
* case, the function returns the maximum number of input frames needed.
*/
uint32_t st_rate_frames_in(void *opaque, uint32_t frames_out)
{
struct rate *rate = opaque;
uint64_t opos_start, opos_end;
uint32_t ipos_start, ipos_end;
if (rate->opos_inc == 1ULL << 32) {
return frames_out;
}
if (frames_out) {
opos_start = rate->opos;
ipos_start = rate->ipos;
} else {
uint64_t offset;
/* add offset = ceil(opos_inc) to opos and ipos to avoid an underflow */
offset = (rate->opos_inc + (1ULL << 32) - 1) & ~((1ULL << 32) - 1);
opos_start = rate->opos + offset;
ipos_start = rate->ipos + (offset >> 32);
}
/* last frame written was at opos_start - rate->opos_inc */
opos_end = opos_start - rate->opos_inc + rate->opos_inc * frames_out;
ipos_end = (opos_end >> 32) + 1;
/* last frame read was at ipos_start - 1 */
return ipos_end + 1 > ipos_start ? ipos_end + 1 - ipos_start : 0;
}
void mixeng_clear (struct st_sample *buf, int len)
{
memset (buf, 0, len * sizeof (struct st_sample));
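The st_rate_frames_out()/st_rate_frames_in() helpers shown in the hunk above do their bookkeeping in 32.32 fixed point. A minimal standalone sketch of that arithmetic follows; it assumes opos_inc = (inrate << 32) / outrate as derived by st_rate_start(), a hypothetical 44100 Hz to 48000 Hz upsampler, and a freshly started resampler (ipos = opos = 0), and it omits the early-return guards, so it is an illustration rather than the patch's code.

/*
 * Hypothetical sketch (not part of this series): the 32.32 fixed-point
 * arithmetic of st_rate_frames_out() evaluated once.  The unsigned
 * wraparound in the ipos_end computation is intentional, as in the
 * original code ("last frame read was at ipos - 1").
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* mirrors st_rate_start(): opos_inc = (inrate << 32) / outrate */
    uint64_t opos_inc = ((uint64_t)44100 << 32) / 48000;
    uint64_t opos = 0;              /* fresh resampler */
    uint32_t ipos = 0;
    uint32_t frames_in = 1024;

    uint32_t ipos_end = ipos - 1 + frames_in;       /* last frame read */
    uint64_t opos_end = (uint64_t)ipos_end << 32;
    uint64_t opos_delta = opos_end - opos + opos_inc;
    uint32_t frames_out = opos_delta / opos_inc;

    if (opos_delta % opos_inc == 0) {
        frames_out--;
    }
    /* prints 1114, roughly frames_in * 48000 / 44100 */
    printf("%u input frames -> %u output frames\n", frames_in, frames_out);
    return 0;
}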


@@ -52,8 +52,6 @@ void st_rate_flow(void *opaque, st_sample *ibuf, st_sample *obuf,
void st_rate_flow_mix(void *opaque, st_sample *ibuf, st_sample *obuf,
size_t *isamp, size_t *osamp);
void st_rate_stop (void *opaque);
uint32_t st_rate_frames_out(void *opaque, uint32_t frames_in);
uint32_t st_rate_frames_in(void *opaque, uint32_t frames_out);
void mixeng_clear (struct st_sample *buf, int len);
void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol);


@@ -40,6 +40,8 @@ void NAME (void *opaque, struct st_sample *ibuf, struct st_sample *obuf,
int64_t t;
#endif
ilast = rate->ilast;
istart = ibuf;
iend = ibuf + *isamp;
@@ -57,17 +59,15 @@ void NAME (void *opaque, struct st_sample *ibuf, struct st_sample *obuf,
return;
}
/* without input samples, there's nothing to do */
if (ibuf >= iend) {
*osamp = 0;
return;
}
while (obuf < oend) {
ilast = rate->ilast;
while (true) {
/* Safety catch to make sure we have input samples. */
if (ibuf >= iend) {
break;
}
/* read as many input samples so that ipos > opos */
while (rate->ipos <= (rate->opos >> 32)) {
ilast = *ibuf++;
rate->ipos++;
@@ -78,11 +78,6 @@ void NAME (void *opaque, struct st_sample *ibuf, struct st_sample *obuf,
}
}
/* make sure that the next output sample can be written */
if (obuf >= oend) {
break;
}
icur = *ibuf;
/* wrap ipos and opos around long before they overflow */


@@ -59,19 +59,6 @@ struct CryptoDevBackendBuiltin {
CryptoDevBackendBuiltinSession *sessions[MAX_NUM_SESSIONS];
};
static void cryptodev_builtin_init_akcipher(CryptoDevBackend *backend)
{
QCryptoAkCipherOptions opts;
opts.alg = QCRYPTO_AKCIPHER_ALG_RSA;
opts.u.rsa.padding_alg = QCRYPTO_RSA_PADDING_ALG_RAW;
if (qcrypto_akcipher_supports(&opts)) {
backend->conf.crypto_services |=
(1u << QCRYPTODEV_BACKEND_SERVICE_AKCIPHER);
backend->conf.akcipher_algo = 1u << VIRTIO_CRYPTO_AKCIPHER_RSA;
}
}
static void cryptodev_builtin_init(
CryptoDevBackend *backend, Error **errp)
{
@@ -85,18 +72,21 @@ static void cryptodev_builtin_init(
return;
}
cc = cryptodev_backend_new_client();
cc = cryptodev_backend_new_client(
"cryptodev-builtin", NULL);
cc->info_str = g_strdup_printf("cryptodev-builtin0");
cc->queue_index = 0;
cc->type = QCRYPTODEV_BACKEND_TYPE_BUILTIN;
cc->type = CRYPTODEV_BACKEND_TYPE_BUILTIN;
backend->conf.peers.ccs[0] = cc;
backend->conf.crypto_services =
1u << QCRYPTODEV_BACKEND_SERVICE_CIPHER |
1u << QCRYPTODEV_BACKEND_SERVICE_HASH |
1u << QCRYPTODEV_BACKEND_SERVICE_MAC;
1u << VIRTIO_CRYPTO_SERVICE_CIPHER |
1u << VIRTIO_CRYPTO_SERVICE_HASH |
1u << VIRTIO_CRYPTO_SERVICE_MAC |
1u << VIRTIO_CRYPTO_SERVICE_AKCIPHER;
backend->conf.cipher_algo_l = 1u << VIRTIO_CRYPTO_CIPHER_AES_CBC;
backend->conf.hash_algo = 1u << VIRTIO_CRYPTO_HASH_SHA1;
backend->conf.akcipher_algo = 1u << VIRTIO_CRYPTO_AKCIPHER_RSA;
/*
* Set the Maximum length of crypto request.
* Why this value? Just avoid to overflow when
@@ -105,7 +95,6 @@ static void cryptodev_builtin_init(
backend->conf.max_size = LONG_MAX - sizeof(CryptoDevBackendOpInfo);
backend->conf.max_cipher_key_len = CRYPTODEV_BUITLIN_MAX_CIPHER_KEY_LEN;
backend->conf.max_auth_key_len = CRYPTODEV_BUITLIN_MAX_AUTH_KEY_LEN;
cryptodev_builtin_init_akcipher(backend);
cryptodev_backend_set_ready(backend, true);
}
@@ -539,14 +528,17 @@ static int cryptodev_builtin_asym_operation(
static int cryptodev_builtin_operation(
CryptoDevBackend *backend,
CryptoDevBackendOpInfo *op_info)
CryptoDevBackendOpInfo *op_info,
uint32_t queue_index,
CryptoDevCompletionFunc cb,
void *opaque)
{
CryptoDevBackendBuiltin *builtin =
CRYPTODEV_BACKEND_BUILTIN(backend);
CryptoDevBackendBuiltinSession *sess;
CryptoDevBackendSymOpInfo *sym_op_info;
CryptoDevBackendAsymOpInfo *asym_op_info;
QCryptodevBackendAlgType algtype = op_info->algtype;
enum CryptoDevBackendAlgType algtype = op_info->algtype;
int status = -VIRTIO_CRYPTO_ERR;
Error *local_error = NULL;
@@ -558,11 +550,11 @@ static int cryptodev_builtin_operation(
}
sess = builtin->sessions[op_info->session_id];
if (algtype == QCRYPTODEV_BACKEND_ALG_SYM) {
if (algtype == CRYPTODEV_BACKEND_ALG_SYM) {
sym_op_info = op_info->u.sym_op_info;
status = cryptodev_builtin_sym_operation(sess, sym_op_info,
&local_error);
} else if (algtype == QCRYPTODEV_BACKEND_ALG_ASYM) {
} else if (algtype == CRYPTODEV_BACKEND_ALG_ASYM) {
asym_op_info = op_info->u.asym_op_info;
status = cryptodev_builtin_asym_operation(sess, op_info->op_code,
asym_op_info, &local_error);
@@ -571,8 +563,8 @@ static int cryptodev_builtin_operation(
if (local_error) {
error_report_err(local_error);
}
if (op_info->cb) {
op_info->cb(op_info->opaque, status);
if (cb) {
cb(opaque, status);
}
return 0;
}
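The hunk above switches between passing the completion callback and its opaque pointer via op_info and passing them as explicit do_op() arguments; either way the backend finishes a request by calling cb(opaque, status). A small self-contained sketch of that pattern follows; the request struct and function names are hypothetical, and the local typedef only mirrors the (opaque, status) shape of CryptoDevCompletionFunc seen in this diff.

/*
 * Hypothetical sketch (not part of this series): invoking a completion
 * callback with its opaque pointer once a status is known.
 */
#include <stdio.h>

typedef void (*CompletionFunc)(void *opaque, int status);

struct request {                    /* hypothetical caller state */
    int id;
};

static void request_done(void *opaque, int status)
{
    struct request *req = opaque;
    printf("request %d completed, status %d\n", req->id, status);
}

/* stand-in for the tail of a backend's do_op implementation */
static int do_op_finish(CompletionFunc cb, void *opaque, int status)
{
    if (cb) {
        cb(opaque, status);
    }
    return 0;
}

int main(void)
{
    struct request req = { .id = 1 };
    return do_op_finish(request_done, &req, 0);
}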


@@ -1,54 +0,0 @@
/*
* HMP commands related to cryptodev
*
* Copyright (c) 2023 Bytedance.Inc
*
* Authors:
* zhenwei pi<pizhenwei@bytedance.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or
* (at your option) any later version.
*/
#include "qemu/osdep.h"
#include "monitor/hmp.h"
#include "monitor/monitor.h"
#include "qapi/qapi-commands-cryptodev.h"
#include "qapi/qmp/qdict.h"
void hmp_info_cryptodev(Monitor *mon, const QDict *qdict)
{
QCryptodevInfoList *il;
QCryptodevBackendServiceTypeList *sl;
QCryptodevBackendClientList *cl;
for (il = qmp_query_cryptodev(NULL); il; il = il->next) {
g_autofree char *services = NULL;
QCryptodevInfo *info = il->value;
char *tmp_services;
/* build a string like 'service=[akcipher|mac|hash|cipher]' */
for (sl = info->service; sl; sl = sl->next) {
const char *service = QCryptodevBackendServiceType_str(sl->value);
if (!services) {
services = g_strdup(service);
} else {
tmp_services = g_strjoin("|", services, service, NULL);
g_free(services);
services = tmp_services;
}
}
monitor_printf(mon, "%s: service=[%s]\n", info->id, services);
for (cl = info->client; cl; cl = cl->next) {
QCryptodevBackendClient *client = cl->value;
monitor_printf(mon, " queue %" PRIu32 ": type=%s\n",
client->queue,
QCryptodevBackendType_str(client->type));
}
}
qapi_free_QCryptodevInfoList(il);
}


@@ -223,14 +223,14 @@ static void cryptodev_lkcf_init(CryptoDevBackend *backend, Error **errp)
return;
}
cc = cryptodev_backend_new_client();
cc = cryptodev_backend_new_client("cryptodev-lkcf", NULL);
cc->info_str = g_strdup_printf("cryptodev-lkcf0");
cc->queue_index = 0;
cc->type = QCRYPTODEV_BACKEND_TYPE_LKCF;
cc->type = CRYPTODEV_BACKEND_TYPE_LKCF;
backend->conf.peers.ccs[0] = cc;
backend->conf.crypto_services =
1u << QCRYPTODEV_BACKEND_SERVICE_AKCIPHER;
1u << VIRTIO_CRYPTO_SERVICE_AKCIPHER;
backend->conf.akcipher_algo = 1u << VIRTIO_CRYPTO_AKCIPHER_RSA;
lkcf->running = true;
@@ -469,12 +469,15 @@ static void *cryptodev_lkcf_worker(void *arg)
static int cryptodev_lkcf_operation(
CryptoDevBackend *backend,
CryptoDevBackendOpInfo *op_info)
CryptoDevBackendOpInfo *op_info,
uint32_t queue_index,
CryptoDevCompletionFunc cb,
void *opaque)
{
CryptoDevBackendLKCF *lkcf =
CRYPTODEV_BACKEND_LKCF(backend);
CryptoDevBackendLKCFSession *sess;
QCryptodevBackendAlgType algtype = op_info->algtype;
enum CryptoDevBackendAlgType algtype = op_info->algtype;
CryptoDevLKCFTask *task;
if (op_info->session_id >= MAX_SESSIONS ||
@@ -485,15 +488,15 @@ static int cryptodev_lkcf_operation(
}
sess = lkcf->sess[op_info->session_id];
if (algtype != QCRYPTODEV_BACKEND_ALG_ASYM) {
if (algtype != CRYPTODEV_BACKEND_ALG_ASYM) {
error_report("algtype not supported: %u", algtype);
return -VIRTIO_CRYPTO_NOTSUPP;
}
task = g_new0(CryptoDevLKCFTask, 1);
task->op_info = op_info;
task->cb = op_info->cb;
task->opaque = op_info->opaque;
task->cb = cb;
task->opaque = opaque;
task->sess = sess;
task->lkcf = lkcf;
task->status = -VIRTIO_CRYPTO_ERR;


@@ -67,7 +67,7 @@ cryptodev_vhost_user_get_vhost(
{
CryptoDevBackendVhostUser *s =
CRYPTODEV_BACKEND_VHOST_USER(b);
assert(cc->type == QCRYPTODEV_BACKEND_TYPE_VHOST_USER);
assert(cc->type == CRYPTODEV_BACKEND_TYPE_VHOST_USER);
assert(queue < MAX_CRYPTO_QUEUE_NUM);
return s->vhost_crypto[queue];
@@ -198,11 +198,12 @@ static void cryptodev_vhost_user_init(
s->opened = true;
for (i = 0; i < queues; i++) {
cc = cryptodev_backend_new_client();
cc = cryptodev_backend_new_client(
"cryptodev-vhost-user", NULL);
cc->info_str = g_strdup_printf("cryptodev-vhost-user%zu to %s ",
i, chr->label);
cc->queue_index = i;
cc->type = QCRYPTODEV_BACKEND_TYPE_VHOST_USER;
cc->type = CRYPTODEV_BACKEND_TYPE_VHOST_USER;
backend->conf.peers.ccs[i] = cc;
@@ -221,9 +222,9 @@ static void cryptodev_vhost_user_init(
cryptodev_vhost_user_event, NULL, s, NULL, true);
backend->conf.crypto_services =
1u << QCRYPTODEV_BACKEND_SERVICE_CIPHER |
1u << QCRYPTODEV_BACKEND_SERVICE_HASH |
1u << QCRYPTODEV_BACKEND_SERVICE_MAC;
1u << VIRTIO_CRYPTO_SERVICE_CIPHER |
1u << VIRTIO_CRYPTO_SERVICE_HASH |
1u << VIRTIO_CRYPTO_SERVICE_MAC;
backend->conf.cipher_algo_l = 1u << VIRTIO_CRYPTO_CIPHER_AES_CBC;
backend->conf.hash_algo = 1u << VIRTIO_CRYPTO_HASH_SHA1;


@@ -127,7 +127,7 @@ cryptodev_get_vhost(CryptoDevBackendClient *cc,
switch (cc->type) {
#if defined(CONFIG_VHOST_USER) && defined(CONFIG_LINUX)
case QCRYPTODEV_BACKEND_TYPE_VHOST_USER:
case CRYPTODEV_BACKEND_TYPE_VHOST_USER:
vhost_crypto = cryptodev_vhost_user_get_vhost(cc, b, queue);
break;
#endif
@@ -195,7 +195,7 @@ int cryptodev_vhost_start(VirtIODevice *dev, int total_queues)
* because vhost user doesn't interrupt masking/unmasking
* properly.
*/
if (cc->type == QCRYPTODEV_BACKEND_TYPE_VHOST_USER) {
if (cc->type == CRYPTODEV_BACKEND_TYPE_VHOST_USER) {
dev->use_guest_notifier_mask = false;
}
}


@@ -23,92 +23,29 @@
#include "qemu/osdep.h"
#include "sysemu/cryptodev.h"
#include "sysemu/stats.h"
#include "qapi/error.h"
#include "qapi/qapi-commands-cryptodev.h"
#include "qapi/qapi-types-stats.h"
#include "qapi/visitor.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
#include "qom/object_interfaces.h"
#include "hw/virtio/virtio-crypto.h"
#define SYM_ENCRYPT_OPS_STR "sym-encrypt-ops"
#define SYM_DECRYPT_OPS_STR "sym-decrypt-ops"
#define SYM_ENCRYPT_BYTES_STR "sym-encrypt-bytes"
#define SYM_DECRYPT_BYTES_STR "sym-decrypt-bytes"
#define ASYM_ENCRYPT_OPS_STR "asym-encrypt-ops"
#define ASYM_DECRYPT_OPS_STR "asym-decrypt-ops"
#define ASYM_SIGN_OPS_STR "asym-sign-ops"
#define ASYM_VERIFY_OPS_STR "asym-verify-ops"
#define ASYM_ENCRYPT_BYTES_STR "asym-encrypt-bytes"
#define ASYM_DECRYPT_BYTES_STR "asym-decrypt-bytes"
#define ASYM_SIGN_BYTES_STR "asym-sign-bytes"
#define ASYM_VERIFY_BYTES_STR "asym-verify-bytes"
typedef struct StatsArgs {
union StatsResultsType {
StatsResultList **stats;
StatsSchemaList **schema;
} result;
strList *names;
Error **errp;
} StatsArgs;
static QTAILQ_HEAD(, CryptoDevBackendClient) crypto_clients;
static int qmp_query_cryptodev_foreach(Object *obj, void *data)
{
CryptoDevBackend *backend;
QCryptodevInfoList **infolist = data;
uint32_t services, i;
if (!object_dynamic_cast(obj, TYPE_CRYPTODEV_BACKEND)) {
return 0;
}
QCryptodevInfo *info = g_new0(QCryptodevInfo, 1);
info->id = g_strdup(object_get_canonical_path_component(obj));
backend = CRYPTODEV_BACKEND(obj);
services = backend->conf.crypto_services;
for (i = 0; i < QCRYPTODEV_BACKEND_SERVICE__MAX; i++) {
if (services & (1 << i)) {
QAPI_LIST_PREPEND(info->service, i);
}
}
for (i = 0; i < backend->conf.peers.queues; i++) {
CryptoDevBackendClient *cc = backend->conf.peers.ccs[i];
QCryptodevBackendClient *client = g_new0(QCryptodevBackendClient, 1);
client->queue = cc->queue_index;
client->type = cc->type;
QAPI_LIST_PREPEND(info->client, client);
}
QAPI_LIST_PREPEND(*infolist, info);
return 0;
}
QCryptodevInfoList *qmp_query_cryptodev(Error **errp)
{
QCryptodevInfoList *list = NULL;
Object *objs = container_get(object_get_root(), "/objects");
object_child_foreach(objs, qmp_query_cryptodev_foreach, &list);
return list;
}
CryptoDevBackendClient *cryptodev_backend_new_client(void)
CryptoDevBackendClient *
cryptodev_backend_new_client(const char *model,
const char *name)
{
CryptoDevBackendClient *cc;
cc = g_new0(CryptoDevBackendClient, 1);
cc->model = g_strdup(model);
if (name) {
cc->name = g_strdup(name);
}
QTAILQ_INSERT_TAIL(&crypto_clients, cc, next);
return cc;
@@ -118,6 +55,8 @@ void cryptodev_backend_free_client(
CryptoDevBackendClient *cc)
{
QTAILQ_REMOVE(&crypto_clients, cc, next);
g_free(cc->name);
g_free(cc->model);
g_free(cc->info_str);
g_free(cc);
}
@@ -132,9 +71,6 @@ void cryptodev_backend_cleanup(
if (bc->cleanup) {
bc->cleanup(backend, errp);
}
g_free(backend->sym_stat);
g_free(backend->asym_stat);
}
int cryptodev_backend_create_session(
@@ -171,111 +107,38 @@ int cryptodev_backend_close_session(
static int cryptodev_backend_operation(
CryptoDevBackend *backend,
CryptoDevBackendOpInfo *op_info)
CryptoDevBackendOpInfo *op_info,
uint32_t queue_index,
CryptoDevCompletionFunc cb,
void *opaque)
{
CryptoDevBackendClass *bc =
CRYPTODEV_BACKEND_GET_CLASS(backend);
if (bc->do_op) {
return bc->do_op(backend, op_info);
return bc->do_op(backend, op_info, queue_index, cb, opaque);
}
return -VIRTIO_CRYPTO_NOTSUPP;
}
static int cryptodev_backend_account(CryptoDevBackend *backend,
CryptoDevBackendOpInfo *op_info)
int cryptodev_backend_crypto_operation(
CryptoDevBackend *backend,
void *opaque1,
uint32_t queue_index,
CryptoDevCompletionFunc cb, void *opaque2)
{
enum QCryptodevBackendAlgType algtype = op_info->algtype;
int len;
VirtIOCryptoReq *req = opaque1;
CryptoDevBackendOpInfo *op_info = &req->op_info;
enum CryptoDevBackendAlgType algtype = req->flags;
if (algtype == QCRYPTODEV_BACKEND_ALG_ASYM) {
CryptoDevBackendAsymOpInfo *asym_op_info = op_info->u.asym_op_info;
len = asym_op_info->src_len;
switch (op_info->op_code) {
case VIRTIO_CRYPTO_AKCIPHER_ENCRYPT:
CryptodevAsymStatIncEncrypt(backend, len);
break;
case VIRTIO_CRYPTO_AKCIPHER_DECRYPT:
CryptodevAsymStatIncDecrypt(backend, len);
break;
case VIRTIO_CRYPTO_AKCIPHER_SIGN:
CryptodevAsymStatIncSign(backend, len);
break;
case VIRTIO_CRYPTO_AKCIPHER_VERIFY:
CryptodevAsymStatIncVerify(backend, len);
break;
default:
return -VIRTIO_CRYPTO_NOTSUPP;
}
} else if (algtype == QCRYPTODEV_BACKEND_ALG_SYM) {
CryptoDevBackendSymOpInfo *sym_op_info = op_info->u.sym_op_info;
len = sym_op_info->src_len;
switch (op_info->op_code) {
case VIRTIO_CRYPTO_CIPHER_ENCRYPT:
CryptodevSymStatIncEncrypt(backend, len);
break;
case VIRTIO_CRYPTO_CIPHER_DECRYPT:
CryptodevSymStatIncDecrypt(backend, len);
break;
default:
return -VIRTIO_CRYPTO_NOTSUPP;
}
} else {
if ((algtype != CRYPTODEV_BACKEND_ALG_SYM)
&& (algtype != CRYPTODEV_BACKEND_ALG_ASYM)) {
error_report("Unsupported cryptodev alg type: %" PRIu32 "", algtype);
return -VIRTIO_CRYPTO_NOTSUPP;
}
return len;
}
static void cryptodev_backend_throttle_timer_cb(void *opaque)
{
CryptoDevBackend *backend = (CryptoDevBackend *)opaque;
CryptoDevBackendOpInfo *op_info, *tmpop;
int ret;
QTAILQ_FOREACH_SAFE(op_info, &backend->opinfos, next, tmpop) {
QTAILQ_REMOVE(&backend->opinfos, op_info, next);
ret = cryptodev_backend_account(backend, op_info);
if (ret < 0) {
op_info->cb(op_info->opaque, ret);
continue;
}
throttle_account(&backend->ts, true, ret);
cryptodev_backend_operation(backend, op_info);
if (throttle_enabled(&backend->tc) &&
throttle_schedule_timer(&backend->ts, &backend->tt, true)) {
break;
}
}
}
int cryptodev_backend_crypto_operation(
CryptoDevBackend *backend,
CryptoDevBackendOpInfo *op_info)
{
int ret;
if (!throttle_enabled(&backend->tc)) {
goto do_account;
}
if (throttle_schedule_timer(&backend->ts, &backend->tt, true) ||
!QTAILQ_EMPTY(&backend->opinfos)) {
QTAILQ_INSERT_TAIL(&backend->opinfos, op_info, next);
return 0;
}
do_account:
ret = cryptodev_backend_account(backend, op_info);
if (ret < 0) {
return ret;
}
throttle_account(&backend->ts, true, ret);
return cryptodev_backend_operation(backend, op_info);
return cryptodev_backend_operation(backend, op_info, queue_index,
cb, opaque2);
}
static void
@@ -306,111 +169,15 @@ cryptodev_backend_set_queues(Object *obj, Visitor *v, const char *name,
backend->conf.peers.queues = value;
}
static void cryptodev_backend_set_throttle(CryptoDevBackend *backend, int field,
uint64_t value, Error **errp)
{
uint64_t orig = backend->tc.buckets[field].avg;
bool enabled = throttle_enabled(&backend->tc);
if (orig == value) {
return;
}
backend->tc.buckets[field].avg = value;
if (!throttle_enabled(&backend->tc)) {
throttle_timers_destroy(&backend->tt);
cryptodev_backend_throttle_timer_cb(backend); /* drain opinfos */
return;
}
if (!throttle_is_valid(&backend->tc, errp)) {
backend->tc.buckets[field].avg = orig; /* revert change */
return;
}
if (!enabled) {
throttle_init(&backend->ts);
throttle_timers_init(&backend->tt, qemu_get_aio_context(),
QEMU_CLOCK_REALTIME,
cryptodev_backend_throttle_timer_cb, /* FIXME */
cryptodev_backend_throttle_timer_cb, backend);
}
throttle_config(&backend->ts, QEMU_CLOCK_REALTIME, &backend->tc);
}
static void cryptodev_backend_get_bps(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
uint64_t value = backend->tc.buckets[THROTTLE_BPS_TOTAL].avg;
visit_type_uint64(v, name, &value, errp);
}
static void cryptodev_backend_set_bps(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
uint64_t value;
if (!visit_type_uint64(v, name, &value, errp)) {
return;
}
cryptodev_backend_set_throttle(backend, THROTTLE_BPS_TOTAL, value, errp);
}
static void cryptodev_backend_get_ops(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
uint64_t value = backend->tc.buckets[THROTTLE_OPS_TOTAL].avg;
visit_type_uint64(v, name, &value, errp);
}
static void cryptodev_backend_set_ops(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
uint64_t value;
if (!visit_type_uint64(v, name, &value, errp)) {
return;
}
cryptodev_backend_set_throttle(backend, THROTTLE_OPS_TOTAL, value, errp);
}
static void
cryptodev_backend_complete(UserCreatable *uc, Error **errp)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(uc);
CryptoDevBackendClass *bc = CRYPTODEV_BACKEND_GET_CLASS(uc);
uint32_t services;
uint64_t value;
QTAILQ_INIT(&backend->opinfos);
value = backend->tc.buckets[THROTTLE_OPS_TOTAL].avg;
cryptodev_backend_set_throttle(backend, THROTTLE_OPS_TOTAL, value, errp);
value = backend->tc.buckets[THROTTLE_BPS_TOTAL].avg;
cryptodev_backend_set_throttle(backend, THROTTLE_BPS_TOTAL, value, errp);
if (bc->init) {
bc->init(backend, errp);
}
services = backend->conf.crypto_services;
if (services & (1 << QCRYPTODEV_BACKEND_SERVICE_CIPHER)) {
backend->sym_stat = g_new0(CryptodevBackendSymStat, 1);
}
if (services & (1 << QCRYPTODEV_BACKEND_SERVICE_AKCIPHER)) {
backend->asym_stat = g_new0(CryptodevBackendAsymStat, 1);
}
}
void cryptodev_backend_set_used(CryptoDevBackend *backend, bool used)
@@ -441,12 +208,8 @@ cryptodev_backend_can_be_deleted(UserCreatable *uc)
static void cryptodev_backend_instance_init(Object *obj)
{
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
/* Initialize devices' queues property to 1 */
object_property_set_int(obj, "queues", 1, NULL);
throttle_config_init(&backend->tc);
}
static void cryptodev_backend_finalize(Object *obj)
@@ -454,137 +217,6 @@ static void cryptodev_backend_finalize(Object *obj)
CryptoDevBackend *backend = CRYPTODEV_BACKEND(obj);
cryptodev_backend_cleanup(backend, NULL);
if (throttle_enabled(&backend->tc)) {
throttle_timers_destroy(&backend->tt);
}
}
static StatsList *cryptodev_backend_stats_add(const char *name, int64_t *val,
StatsList *stats_list)
{
Stats *stats = g_new0(Stats, 1);
stats->name = g_strdup(name);
stats->value = g_new0(StatsValue, 1);
stats->value->type = QTYPE_QNUM;
stats->value->u.scalar = *val;
QAPI_LIST_PREPEND(stats_list, stats);
return stats_list;
}
static int cryptodev_backend_stats_query(Object *obj, void *data)
{
StatsArgs *stats_args = data;
StatsResultList **stats_results = stats_args->result.stats;
StatsList *stats_list = NULL;
StatsResult *entry;
CryptoDevBackend *backend;
CryptodevBackendSymStat *sym_stat;
CryptodevBackendAsymStat *asym_stat;
if (!object_dynamic_cast(obj, TYPE_CRYPTODEV_BACKEND)) {
return 0;
}
backend = CRYPTODEV_BACKEND(obj);
sym_stat = backend->sym_stat;
if (sym_stat) {
stats_list = cryptodev_backend_stats_add(SYM_ENCRYPT_OPS_STR,
&sym_stat->encrypt_ops, stats_list);
stats_list = cryptodev_backend_stats_add(SYM_DECRYPT_OPS_STR,
&sym_stat->decrypt_ops, stats_list);
stats_list = cryptodev_backend_stats_add(SYM_ENCRYPT_BYTES_STR,
&sym_stat->encrypt_bytes, stats_list);
stats_list = cryptodev_backend_stats_add(SYM_DECRYPT_BYTES_STR,
&sym_stat->decrypt_bytes, stats_list);
}
asym_stat = backend->asym_stat;
if (asym_stat) {
stats_list = cryptodev_backend_stats_add(ASYM_ENCRYPT_OPS_STR,
&asym_stat->encrypt_ops, stats_list);
stats_list = cryptodev_backend_stats_add(ASYM_DECRYPT_OPS_STR,
&asym_stat->decrypt_ops, stats_list);
stats_list = cryptodev_backend_stats_add(ASYM_SIGN_OPS_STR,
&asym_stat->sign_ops, stats_list);
stats_list = cryptodev_backend_stats_add(ASYM_VERIFY_OPS_STR,
&asym_stat->verify_ops, stats_list);
stats_list = cryptodev_backend_stats_add(ASYM_ENCRYPT_BYTES_STR,
&asym_stat->encrypt_bytes, stats_list);
stats_list = cryptodev_backend_stats_add(ASYM_DECRYPT_BYTES_STR,
&asym_stat->decrypt_bytes, stats_list);
stats_list = cryptodev_backend_stats_add(ASYM_SIGN_BYTES_STR,
&asym_stat->sign_bytes, stats_list);
stats_list = cryptodev_backend_stats_add(ASYM_VERIFY_BYTES_STR,
&asym_stat->verify_bytes, stats_list);
}
entry = g_new0(StatsResult, 1);
entry->provider = STATS_PROVIDER_CRYPTODEV;
entry->qom_path = g_strdup(object_get_canonical_path(obj));
entry->stats = stats_list;
QAPI_LIST_PREPEND(*stats_results, entry);
return 0;
}
static void cryptodev_backend_stats_cb(StatsResultList **result,
StatsTarget target,
strList *names, strList *targets,
Error **errp)
{
switch (target) {
case STATS_TARGET_CRYPTODEV:
{
Object *objs = container_get(object_get_root(), "/objects");
StatsArgs stats_args;
stats_args.result.stats = result;
stats_args.names = names;
stats_args.errp = errp;
object_child_foreach(objs, cryptodev_backend_stats_query, &stats_args);
break;
}
default:
break;
}
}
static StatsSchemaValueList *cryptodev_backend_schemas_add(const char *name,
StatsSchemaValueList *list)
{
StatsSchemaValueList *schema_entry = g_new0(StatsSchemaValueList, 1);
schema_entry->value = g_new0(StatsSchemaValue, 1);
schema_entry->value->type = STATS_TYPE_CUMULATIVE;
schema_entry->value->name = g_strdup(name);
schema_entry->next = list;
return schema_entry;
}
static void cryptodev_backend_schemas_cb(StatsSchemaList **result,
Error **errp)
{
StatsSchemaValueList *stats_list = NULL;
const char *sym_stats[] = { SYM_ENCRYPT_OPS_STR, SYM_DECRYPT_OPS_STR,
SYM_ENCRYPT_BYTES_STR, SYM_DECRYPT_BYTES_STR };
const char *asym_stats[] = { ASYM_ENCRYPT_OPS_STR, ASYM_DECRYPT_OPS_STR,
ASYM_SIGN_OPS_STR, ASYM_VERIFY_OPS_STR,
ASYM_ENCRYPT_BYTES_STR, ASYM_DECRYPT_BYTES_STR,
ASYM_SIGN_BYTES_STR, ASYM_VERIFY_BYTES_STR };
for (int i = 0; i < ARRAY_SIZE(sym_stats); i++) {
stats_list = cryptodev_backend_schemas_add(sym_stats[i], stats_list);
}
for (int i = 0; i < ARRAY_SIZE(asym_stats); i++) {
stats_list = cryptodev_backend_schemas_add(asym_stats[i], stats_list);
}
add_stats_schema(result, STATS_PROVIDER_CRYPTODEV, STATS_TARGET_CRYPTODEV,
stats_list);
}
static void
@@ -600,17 +232,6 @@ cryptodev_backend_class_init(ObjectClass *oc, void *data)
cryptodev_backend_get_queues,
cryptodev_backend_set_queues,
NULL, NULL);
object_class_property_add(oc, "throttle-bps", "uint64",
cryptodev_backend_get_bps,
cryptodev_backend_set_bps,
NULL, NULL);
object_class_property_add(oc, "throttle-ops", "uint64",
cryptodev_backend_get_ops,
cryptodev_backend_set_ops,
NULL, NULL);
add_stats_callbacks(STATS_PROVIDER_CRYPTODEV, cryptodev_backend_stats_cb,
cryptodev_backend_schemas_cb);
}
static const TypeInfo cryptodev_backend_info = {


@@ -1,6 +1,5 @@
softmmu_ss.add([files(
'cryptodev-builtin.c',
'cryptodev-hmp-cmds.c',
'cryptodev.c',
'hostmem-ram.c',
'hostmem.c',


@@ -573,13 +573,13 @@ static int tpm_emulator_prepare_data_fd(TPMEmulator *tpm_emu)
goto err_exit;
}
close(fds[1]);
closesocket(fds[1]);
return 0;
err_exit:
close(fds[0]);
close(fds[1]);
closesocket(fds[0]);
closesocket(fds[1]);
return -1;
}


@@ -5879,10 +5879,9 @@ int64_t coroutine_fn bdrv_co_getlength(BlockDriverState *bs)
}
/* return 0 as number of sectors if no device present or error */
void coroutine_fn bdrv_co_get_geometry(BlockDriverState *bs,
uint64_t *nb_sectors_ptr)
void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr)
{
int64_t nb_sectors = bdrv_co_nb_sectors(bs);
int64_t nb_sectors = bdrv_nb_sectors(bs);
IO_CODE();
*nb_sectors_ptr = nb_sectors < 0 ? 0 : nb_sectors;


@@ -1615,16 +1615,13 @@ int64_t coroutine_fn blk_co_getlength(BlockBackend *blk)
return bdrv_co_getlength(blk_bs(blk));
}
void coroutine_fn blk_co_get_geometry(BlockBackend *blk,
uint64_t *nb_sectors_ptr)
void blk_get_geometry(BlockBackend *blk, uint64_t *nb_sectors_ptr)
{
IO_CODE();
GRAPH_RDLOCK_GUARD();
if (!blk_bs(blk)) {
*nb_sectors_ptr = 0;
} else {
bdrv_co_get_geometry(blk_bs(blk), nb_sectors_ptr);
bdrv_get_geometry(blk_bs(blk), nb_sectors_ptr);
}
}


@@ -673,16 +673,7 @@ static void fuse_fallocate(fuse_req_t req, fuse_ino_t inode, int mode,
do {
int size = MIN(length, BDRV_REQUEST_MAX_BYTES);
ret = blk_pwrite_zeroes(exp->common.blk, offset, size,
BDRV_REQ_MAY_UNMAP | BDRV_REQ_NO_FALLBACK);
if (ret == -ENOTSUP) {
/*
* fallocate() specifies to return EOPNOTSUPP for unsupported
* operations
*/
ret = -EOPNOTSUPP;
}
ret = blk_pdiscard(exp->common.blk, offset, size);
offset += size;
length -= size;
} while (ret == 0 && length > 0);


@@ -22,9 +22,8 @@ struct virtio_blk_inhdr {
unsigned char status;
};
static bool coroutine_fn
virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
uint64_t sector, size_t size)
static bool virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
uint64_t sector, size_t size)
{
uint64_t nb_sectors;
uint64_t total_sectors;
@@ -42,7 +41,7 @@ virtio_blk_sect_range_ok(BlockBackend *blk, uint32_t block_size,
if ((sector << VIRTIO_BLK_SECTOR_BITS) % block_size) {
return false;
}
blk_co_get_geometry(blk, &total_sectors);
blk_get_geometry(blk, &total_sectors);
if (sector > total_sectors || nb_sectors > total_sectors - sector) {
return false;
}


@@ -48,7 +48,6 @@
#include "qemu/option.h"
#include "qemu/sockets.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "sysemu/sysemu.h"
#include "monitor/monitor.h"
#include "monitor/hmp.h"


@@ -594,6 +594,7 @@ static int bdrv_qed_open(BlockDriverState *bs, QDict *options, int flags,
qemu_coroutine_enter(qemu_coroutine_create(bdrv_qed_open_entry, &qoc));
BDRV_POLL_WHILE(bs, qoc.ret == -EINPROGRESS);
}
BDRV_POLL_WHILE(bs, qoc.ret == -EINPROGRESS);
return qoc.ret;
}


@@ -38,8 +38,6 @@
#include <sys/sysctl.h>
#include <utime.h>
#include "include/gdbstub/syscalls.h"
#include "qemu.h"
#include "signal-common.h"
#include "user/syscall-trace.h"


@@ -44,7 +44,6 @@
#include "trace/control.h"
#include "crypto/init.h"
#include "qemu/guest-random.h"
#include "gdbstub/user.h"
#include "host-os.h"
#include "target_arch_cpu.h"


@@ -21,7 +21,6 @@
#include "qemu/osdep.h"
#include "qemu/log.h"
#include "qemu.h"
#include "gdbstub/user.h"
#include "signal-common.h"
#include "trace.h"
#include "hw/core/tcg-cpu-ops.h"


@@ -44,39 +44,39 @@
#define ESC 0x1B
#define BAUM_REQ_DisplayData 0x01
#define BAUM_REQ_GetVersionNumber 0x05
#define BAUM_REQ_GetKeys 0x08
#define BAUM_REQ_SetMode 0x12
#define BAUM_REQ_SetProtocol 0x15
#define BAUM_REQ_GetDeviceIdentity 0x84
#define BAUM_REQ_GetSerialNumber 0x8A
#define BAUM_REQ_DisplayData 0x01
#define BAUM_REQ_GetVersionNumber 0x05
#define BAUM_REQ_GetKeys 0x08
#define BAUM_REQ_SetMode 0x12
#define BAUM_REQ_SetProtocol 0x15
#define BAUM_REQ_GetDeviceIdentity 0x84
#define BAUM_REQ_GetSerialNumber 0x8A
#define BAUM_RSP_CellCount 0x01
#define BAUM_RSP_VersionNumber 0x05
#define BAUM_RSP_ModeSetting 0x11
#define BAUM_RSP_CommunicationChannel 0x16
#define BAUM_RSP_PowerdownSignal 0x17
#define BAUM_RSP_HorizontalSensors 0x20
#define BAUM_RSP_VerticalSensors 0x21
#define BAUM_RSP_RoutingKeys 0x22
#define BAUM_RSP_Switches 0x23
#define BAUM_RSP_TopKeys 0x24
#define BAUM_RSP_HorizontalSensor 0x25
#define BAUM_RSP_VerticalSensor 0x26
#define BAUM_RSP_RoutingKey 0x27
#define BAUM_RSP_FrontKeys6 0x28
#define BAUM_RSP_BackKeys6 0x29
#define BAUM_RSP_CommandKeys 0x2B
#define BAUM_RSP_FrontKeys10 0x2C
#define BAUM_RSP_BackKeys10 0x2D
#define BAUM_RSP_EntryKeys 0x33
#define BAUM_RSP_JoyStick 0x34
#define BAUM_RSP_ErrorCode 0x40
#define BAUM_RSP_InfoBlock 0x42
#define BAUM_RSP_DeviceIdentity 0x84
#define BAUM_RSP_SerialNumber 0x8A
#define BAUM_RSP_BluetoothName 0x8C
#define BAUM_RSP_CellCount 0x01
#define BAUM_RSP_VersionNumber 0x05
#define BAUM_RSP_ModeSetting 0x11
#define BAUM_RSP_CommunicationChannel 0x16
#define BAUM_RSP_PowerdownSignal 0x17
#define BAUM_RSP_HorizontalSensors 0x20
#define BAUM_RSP_VerticalSensors 0x21
#define BAUM_RSP_RoutingKeys 0x22
#define BAUM_RSP_Switches 0x23
#define BAUM_RSP_TopKeys 0x24
#define BAUM_RSP_HorizontalSensor 0x25
#define BAUM_RSP_VerticalSensor 0x26
#define BAUM_RSP_RoutingKey 0x27
#define BAUM_RSP_FrontKeys6 0x28
#define BAUM_RSP_BackKeys6 0x29
#define BAUM_RSP_CommandKeys 0x2B
#define BAUM_RSP_FrontKeys10 0x2C
#define BAUM_RSP_BackKeys10 0x2D
#define BAUM_RSP_EntryKeys 0x33
#define BAUM_RSP_JoyStick 0x34
#define BAUM_RSP_ErrorCode 0x40
#define BAUM_RSP_InfoBlock 0x42
#define BAUM_RSP_DeviceIdentity 0x84
#define BAUM_RSP_SerialNumber 0x8A
#define BAUM_RSP_BluetoothName 0x8C
#define BAUM_TL1 0x01
#define BAUM_TL2 0x02


@@ -1175,10 +1175,12 @@ bool qmp_add_client_char(int fd, bool has_skipauth, bool skipauth,
if (!s) {
error_setg(errp, "protocol '%s' is invalid", protocol);
close(fd);
return false;
}
if (qemu_chr_add_client(s, fd) < 0) {
error_setg(errp, "failed to add client");
close(fd);
return false;
}
return true;


@@ -1,6 +1,6 @@
TARGET_ARCH=aarch64
TARGET_BASE_ARCH=arm
TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml gdb-xml/aarch64-pauth.xml
TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml
TARGET_HAS_BFLT=y
CONFIG_SEMIHOSTING=y
CONFIG_ARM_COMPATIBLE_SEMIHOSTING=y


@@ -1,5 +1,5 @@
TARGET_ARCH=aarch64
TARGET_BASE_ARCH=arm
TARGET_SUPPORTS_MTTCG=y
TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml gdb-xml/arm-core.xml gdb-xml/arm-vfp.xml gdb-xml/arm-vfp3.xml gdb-xml/arm-vfp-sysregs.xml gdb-xml/arm-neon.xml gdb-xml/arm-m-profile.xml gdb-xml/arm-m-profile-mve.xml gdb-xml/aarch64-pauth.xml
TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml gdb-xml/arm-core.xml gdb-xml/arm-vfp.xml gdb-xml/arm-vfp3.xml gdb-xml/arm-vfp-sysregs.xml gdb-xml/arm-neon.xml gdb-xml/arm-m-profile.xml gdb-xml/arm-m-profile-mve.xml
TARGET_NEED_FDT=y


@@ -1,7 +1,7 @@
TARGET_ARCH=aarch64
TARGET_BASE_ARCH=arm
TARGET_BIG_ENDIAN=y
TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml gdb-xml/aarch64-pauth.xml
TARGET_XML_FILES= gdb-xml/aarch64-core.xml gdb-xml/aarch64-fpu.xml
TARGET_HAS_BFLT=y
CONFIG_SEMIHOSTING=y
CONFIG_ARM_COMPATIBLE_SEMIHOSTING=y

configure

@@ -230,7 +230,6 @@ stack_protector=""
safe_stack=""
use_containers="yes"
gdb_bin=$(command -v "gdb-multiarch" || command -v "gdb")
gdb_arches=""
if test -e "$source_path/.git"
then
@@ -2263,7 +2262,6 @@ fi
# tests might fail. Prefer to keep the relevant files in their own
# directory and symlink the directory instead.
LINKS="Makefile"
LINKS="$LINKS docs/config"
LINKS="$LINKS pc-bios/optionrom/Makefile"
LINKS="$LINKS pc-bios/s390-ccw/Makefile"
LINKS="$LINKS pc-bios/vof/Makefile"
@@ -2397,7 +2395,6 @@ if test -n "$gdb_bin"; then
gdb_version=$($gdb_bin --version | head -n 1)
if version_ge ${gdb_version##* } 9.1; then
echo "HAVE_GDB_BIN=$gdb_bin" >> $config_host_mak
gdb_arches=$("$source_path/scripts/probe-gdb-support.py" $gdb_bin)
else
gdb_bin=""
fi
@@ -2522,12 +2519,6 @@ for target in $target_list; do
write_target_makefile "build-tcg-tests-$target" >> "$config_target_mak"
echo "BUILD_STATIC=$build_static" >> "$config_target_mak"
echo "QEMU=$PWD/$qemu" >> "$config_target_mak"
# will GDB work with these binaries?
if test "${gdb_arches#*$arch}" != "$gdb_arches"; then
echo "HOST_GDB_SUPPORTS_ARCH=y" >> "$config_target_mak"
fi
echo "run-tcg-tests-$target: $qemu\$(EXESUF)" >> Makefile.prereqs
tcg_tests_targets="$tcg_tests_targets $target"
fi


@@ -11,7 +11,6 @@
static struct pa_block *pa_space_find_block(struct pa_space *ps, uint64_t pa)
{
size_t i;
for (i = 0; i < ps->block_nr; i++) {
if (ps->block[i].paddr <= pa &&
pa <= ps->block[i].paddr + ps->block[i].size) {


@@ -17,7 +17,6 @@
#define SYM_URL_BASE "https://msdl.microsoft.com/download/symbols/"
#define PDB_NAME "ntkrnlmp.pdb"
#define PE_NAME "ntoskrnl.exe"
#define INITIAL_MXCSR 0x1f80
@@ -283,16 +282,14 @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
};
for (i = 0; i < ps->block_nr; i++) {
h.PhysicalMemoryBlock.NumberOfPages +=
ps->block[i].size / ELF2DMP_PAGE_SIZE;
h.PhysicalMemoryBlock.NumberOfPages += ps->block[i].size / ELF2DMP_PAGE_SIZE;
h.PhysicalMemoryBlock.Run[i] = (WinDumpPhyMemRun64) {
.BasePage = ps->block[i].paddr / ELF2DMP_PAGE_SIZE,
.PageCount = ps->block[i].size / ELF2DMP_PAGE_SIZE,
};
}
h.RequiredDumpSpace +=
h.PhysicalMemoryBlock.NumberOfPages << ELF2DMP_PAGE_BITS;
h.RequiredDumpSpace += h.PhysicalMemoryBlock.NumberOfPages << ELF2DMP_PAGE_BITS;
*hdr = h;
@@ -302,8 +299,7 @@ static int fill_header(WinDumpHeader64 *hdr, struct pa_space *ps,
static int fill_context(KDDEBUGGER_DATA64 *kdbg,
struct va_space *vs, QEMU_Elf *qe)
{
int i;
int i;
for (i = 0; i < qe->state_nr; i++) {
uint64_t Prcb;
uint64_t Context;
@@ -334,45 +330,6 @@ static int fill_context(KDDEBUGGER_DATA64 *kdbg,
return 0;
}
static int pe_get_data_dir_entry(uint64_t base, void *start_addr, int idx,
void *entry, size_t size, struct va_space *vs)
{
const char e_magic[2] = "MZ";
const char Signature[4] = "PE\0\0";
IMAGE_DOS_HEADER *dos_hdr = start_addr;
IMAGE_NT_HEADERS64 nt_hdrs;
IMAGE_FILE_HEADER *file_hdr = &nt_hdrs.FileHeader;
IMAGE_OPTIONAL_HEADER64 *opt_hdr = &nt_hdrs.OptionalHeader;
IMAGE_DATA_DIRECTORY *data_dir = nt_hdrs.OptionalHeader.DataDirectory;
QEMU_BUILD_BUG_ON(sizeof(*dos_hdr) >= ELF2DMP_PAGE_SIZE);
if (memcmp(&dos_hdr->e_magic, e_magic, sizeof(e_magic))) {
return 1;
}
if (va_space_rw(vs, base + dos_hdr->e_lfanew,
&nt_hdrs, sizeof(nt_hdrs), 0)) {
return 1;
}
if (memcmp(&nt_hdrs.Signature, Signature, sizeof(Signature)) ||
file_hdr->Machine != 0x8664 || opt_hdr->Magic != 0x020b) {
return 1;
}
if (va_space_rw(vs,
base + data_dir[idx].VirtualAddress,
entry, size, 0)) {
return 1;
}
printf("Data directory entry #%d: RVA = 0x%08"PRIx32"\n", idx,
(uint32_t)data_dir[idx].VirtualAddress);
return 0;
}
static int write_dump(struct pa_space *ps,
WinDumpHeader64 *hdr, const char *name)
{
@@ -406,38 +363,45 @@ static int write_dump(struct pa_space *ps,
return fclose(dmp_file);
}
static bool pe_check_export_name(uint64_t base, void *start_addr,
struct va_space *vs)
{
IMAGE_EXPORT_DIRECTORY export_dir;
const char *pe_name;
if (pe_get_data_dir_entry(base, start_addr, IMAGE_FILE_EXPORT_DIRECTORY,
&export_dir, sizeof(export_dir), vs)) {
return false;
}
pe_name = va_space_resolve(vs, base + export_dir.Name);
if (!pe_name) {
return false;
}
return !strcmp(pe_name, PE_NAME);
}
static int pe_get_pdb_symstore_hash(uint64_t base, void *start_addr,
char *hash, struct va_space *vs)
{
const char e_magic[2] = "MZ";
const char Signature[4] = "PE\0\0";
const char sign_rsds[4] = "RSDS";
IMAGE_DOS_HEADER *dos_hdr = start_addr;
IMAGE_NT_HEADERS64 nt_hdrs;
IMAGE_FILE_HEADER *file_hdr = &nt_hdrs.FileHeader;
IMAGE_OPTIONAL_HEADER64 *opt_hdr = &nt_hdrs.OptionalHeader;
IMAGE_DATA_DIRECTORY *data_dir = nt_hdrs.OptionalHeader.DataDirectory;
IMAGE_DEBUG_DIRECTORY debug_dir;
OMFSignatureRSDS rsds;
char *pdb_name;
size_t pdb_name_sz;
size_t i;
if (pe_get_data_dir_entry(base, start_addr, IMAGE_FILE_DEBUG_DIRECTORY,
&debug_dir, sizeof(debug_dir), vs)) {
eprintf("Failed to get Debug Directory\n");
QEMU_BUILD_BUG_ON(sizeof(*dos_hdr) >= ELF2DMP_PAGE_SIZE);
if (memcmp(&dos_hdr->e_magic, e_magic, sizeof(e_magic))) {
return 1;
}
if (va_space_rw(vs, base + dos_hdr->e_lfanew,
&nt_hdrs, sizeof(nt_hdrs), 0)) {
return 1;
}
if (memcmp(&nt_hdrs.Signature, Signature, sizeof(Signature)) ||
file_hdr->Machine != 0x8664 || opt_hdr->Magic != 0x020b) {
return 1;
}
printf("Debug Directory RVA = 0x%08"PRIx32"\n",
(uint32_t)data_dir[IMAGE_FILE_DEBUG_DIRECTORY].VirtualAddress);
if (va_space_rw(vs,
base + data_dir[IMAGE_FILE_DEBUG_DIRECTORY].VirtualAddress,
&debug_dir, sizeof(debug_dir), 0)) {
return 1;
}
@@ -509,7 +473,6 @@ int main(int argc, char *argv[])
uint64_t KdDebuggerDataBlock;
KDDEBUGGER_DATA64 *kdbg;
uint64_t KdVersionBlock;
bool kernel_found = false;
if (argc != 3) {
eprintf("usage:\n\t%s elf_file dmp_file\n", argv[0]);
@@ -557,14 +520,11 @@ int main(int argc, char *argv[])
}
if (*(uint16_t *)nt_start_addr == 0x5a4d) { /* MZ */
if (pe_check_export_name(KernBase, nt_start_addr, &vs)) {
kernel_found = true;
break;
}
break;
}
}
if (!kernel_found) {
if (!nt_start_addr) {
eprintf("Failed to find NT kernel image\n");
err = 1;
goto out_ps;


@@ -33,90 +33,75 @@ typedef struct IMAGE_DOS_HEADER {
} __attribute__ ((packed)) IMAGE_DOS_HEADER;
typedef struct IMAGE_FILE_HEADER {
uint16_t Machine;
uint16_t NumberOfSections;
uint32_t TimeDateStamp;
uint32_t PointerToSymbolTable;
uint32_t NumberOfSymbols;
uint16_t SizeOfOptionalHeader;
uint16_t Characteristics;
uint16_t Machine;
uint16_t NumberOfSections;
uint32_t TimeDateStamp;
uint32_t PointerToSymbolTable;
uint32_t NumberOfSymbols;
uint16_t SizeOfOptionalHeader;
uint16_t Characteristics;
} __attribute__ ((packed)) IMAGE_FILE_HEADER;
typedef struct IMAGE_DATA_DIRECTORY {
uint32_t VirtualAddress;
uint32_t Size;
uint32_t VirtualAddress;
uint32_t Size;
} __attribute__ ((packed)) IMAGE_DATA_DIRECTORY;
#define IMAGE_NUMBEROF_DIRECTORY_ENTRIES 16
typedef struct IMAGE_OPTIONAL_HEADER64 {
uint16_t Magic; /* 0x20b */
uint8_t MajorLinkerVersion;
uint8_t MinorLinkerVersion;
uint32_t SizeOfCode;
uint32_t SizeOfInitializedData;
uint32_t SizeOfUninitializedData;
uint32_t AddressOfEntryPoint;
uint32_t BaseOfCode;
uint64_t ImageBase;
uint32_t SectionAlignment;
uint32_t FileAlignment;
uint16_t MajorOperatingSystemVersion;
uint16_t MinorOperatingSystemVersion;
uint16_t MajorImageVersion;
uint16_t MinorImageVersion;
uint16_t MajorSubsystemVersion;
uint16_t MinorSubsystemVersion;
uint32_t Win32VersionValue;
uint32_t SizeOfImage;
uint32_t SizeOfHeaders;
uint32_t CheckSum;
uint16_t Subsystem;
uint16_t DllCharacteristics;
uint64_t SizeOfStackReserve;
uint64_t SizeOfStackCommit;
uint64_t SizeOfHeapReserve;
uint64_t SizeOfHeapCommit;
uint32_t LoaderFlags;
uint32_t NumberOfRvaAndSizes;
IMAGE_DATA_DIRECTORY DataDirectory[IMAGE_NUMBEROF_DIRECTORY_ENTRIES];
uint16_t Magic; /* 0x20b */
uint8_t MajorLinkerVersion;
uint8_t MinorLinkerVersion;
uint32_t SizeOfCode;
uint32_t SizeOfInitializedData;
uint32_t SizeOfUninitializedData;
uint32_t AddressOfEntryPoint;
uint32_t BaseOfCode;
uint64_t ImageBase;
uint32_t SectionAlignment;
uint32_t FileAlignment;
uint16_t MajorOperatingSystemVersion;
uint16_t MinorOperatingSystemVersion;
uint16_t MajorImageVersion;
uint16_t MinorImageVersion;
uint16_t MajorSubsystemVersion;
uint16_t MinorSubsystemVersion;
uint32_t Win32VersionValue;
uint32_t SizeOfImage;
uint32_t SizeOfHeaders;
uint32_t CheckSum;
uint16_t Subsystem;
uint16_t DllCharacteristics;
uint64_t SizeOfStackReserve;
uint64_t SizeOfStackCommit;
uint64_t SizeOfHeapReserve;
uint64_t SizeOfHeapCommit;
uint32_t LoaderFlags;
uint32_t NumberOfRvaAndSizes;
IMAGE_DATA_DIRECTORY DataDirectory[IMAGE_NUMBEROF_DIRECTORY_ENTRIES];
} __attribute__ ((packed)) IMAGE_OPTIONAL_HEADER64;
typedef struct IMAGE_NT_HEADERS64 {
uint32_t Signature;
IMAGE_FILE_HEADER FileHeader;
IMAGE_OPTIONAL_HEADER64 OptionalHeader;
uint32_t Signature;
IMAGE_FILE_HEADER FileHeader;
IMAGE_OPTIONAL_HEADER64 OptionalHeader;
} __attribute__ ((packed)) IMAGE_NT_HEADERS64;
typedef struct IMAGE_EXPORT_DIRECTORY {
uint32_t Characteristics;
uint32_t TimeDateStamp;
uint16_t MajorVersion;
uint16_t MinorVersion;
uint32_t Name;
uint32_t Base;
uint32_t NumberOfFunctions;
uint32_t NumberOfNames;
uint32_t AddressOfFunctions;
uint32_t AddressOfNames;
uint32_t AddressOfNameOrdinals;
} __attribute__ ((packed)) IMAGE_EXPORT_DIRECTORY;
typedef struct IMAGE_DEBUG_DIRECTORY {
uint32_t Characteristics;
uint32_t TimeDateStamp;
uint16_t MajorVersion;
uint16_t MinorVersion;
uint32_t Type;
uint32_t SizeOfData;
uint32_t AddressOfRawData;
uint32_t PointerToRawData;
uint32_t Characteristics;
uint32_t TimeDateStamp;
uint16_t MajorVersion;
uint16_t MinorVersion;
uint32_t Type;
uint32_t SizeOfData;
uint32_t AddressOfRawData;
uint32_t PointerToRawData;
} __attribute__ ((packed)) IMAGE_DEBUG_DIRECTORY;
#define IMAGE_DEBUG_TYPE_CODEVIEW 2
#endif
#define IMAGE_FILE_EXPORT_DIRECTORY 0
#define IMAGE_FILE_DEBUG_DIRECTORY 6
typedef struct guid_t {


@@ -4,12 +4,7 @@
# This maps email domains to nice easy to read company names
#
linux.alibaba.com Alibaba
amazon.com Amazon
amazon.co.uk Amazon
amazon.de Amazon
amd.com AMD
aspeedtech.com ASPEED Technology Inc.
baidu.com Baidu
bytedance.com ByteDance
cmss.chinamobile.com China Mobile
@@ -17,7 +12,6 @@ citrix.com Citrix
crudebyte.com Crudebyte
chinatelecom.cn China Telecom
eldorado.org.br Instituto de Pesquisas Eldorado
fb.com Facebook
fujitsu.com Fujitsu
google.com Google
greensocs.com GreenSocs
@@ -37,18 +31,15 @@ oracle.com Oracle
proxmox.com Proxmox
quicinc.com Qualcomm Innovation Center
redhat.com Red Hat
rev.ng rev.ng Labs
rt-rk.com RT-RK
samsung.com Samsung
siemens.com Siemens
sifive.com SiFive
suse.com SUSE
suse.de SUSE
syrmia.com SYRMIA
ventanamicro.com Ventana Micro Systems
virtuozzo.com Virtuozzo
vrull.eu VRULL
wdc.com Western Digital
windriver.com Wind River
xilinx.com Xilinx
yadro.com YADRO
yandex-team.ru Yandex


@@ -1,7 +0,0 @@
#
# Alibaba contributors including its subsidiaries
#
# c-sky.com, now part of T-Head, wholly-owned entity of Alibaba Group
ren_guo@c-sky.com
zhiwei_liu@c-sky.com


@@ -1,8 +0,0 @@
# AMD acquired Xilinx and contributors have been slowly updating emails
edgar.iglesias@xilinx.com
fnu.vikram@xilinx.com
francisco.iglesias@xilinx.com
sai.pavan.boddu@xilinx.com
stefano.stabellini@xilinx.com
tong.ho@xilinx.com


@@ -1,5 +0,0 @@
#
# Some Facebook contributors also occasionally use personal email addresses.
#
peter@pjd.dev


@@ -12,4 +12,3 @@ jcfaracco@gmail.com
joel@jms.id.au
sjitindarsingh@gmail.com
tommusta@gmail.com
idan.horowitz@gmail.com


@@ -37,8 +37,3 @@ akihiko.odaki@gmail.com
paul@nowt.org
git@xen0n.name
simon@simonsafar.com
research_trasio@irq.a4lg.com
shentey@gmail.com
bmeng@tinylab.org
strahinja.p.jankovic@gmail.com
Jason@zx2c4.com

cpu.c

@@ -31,18 +31,16 @@
#include "hw/core/sysemu-cpu-ops.h"
#include "exec/address-spaces.h"
#endif
#include "sysemu/cpus.h"
#include "sysemu/tcg.h"
#include "sysemu/kvm.h"
#include "exec/replay-core.h"
#include "exec/cpu-common.h"
#include "exec/exec-all.h"
#include "exec/tb-flush.h"
#include "exec/translate-all.h"
#include "exec/log.h"
#include "hw/core/accel-cpu.h"
#include "trace/trace-root.h"
#include "qemu/accel.h"
#include "qemu/plugin.h"
uintptr_t qemu_host_page_size;
intptr_t qemu_host_page_mask;
@@ -327,14 +325,9 @@ void cpu_single_step(CPUState *cpu, int enabled)
{
if (cpu->singlestep_enabled != enabled) {
cpu->singlestep_enabled = enabled;
#if !defined(CONFIG_USER_ONLY)
const AccelOpsClass *ops = cpus_get_accel();
if (ops->update_guest_debug) {
ops->update_guest_debug(cpu);
if (kvm_enabled()) {
kvm_update_guest_debug(cpu, 0);
}
#endif
trace_breakpoint_singlestep(cpu->cpu_index, enabled);
}
}


@@ -59,7 +59,7 @@ qcrypto_afalg_socket_bind(const char *type, const char *name,
if (bind(sbind, (const struct sockaddr *)&salg, sizeof(salg)) != 0) {
error_setg_errno(errp, errno, "Failed to bind socket");
close(sbind);
closesocket(sbind);
return -1;
}
@@ -105,11 +105,11 @@ void qcrypto_afalg_comm_free(QCryptoAFAlg *afalg)
}
if (afalg->tfmfd != -1) {
close(afalg->tfmfd);
closesocket(afalg->tfmfd);
}
if (afalg->opfd != -1) {
close(afalg->opfd);
closesocket(afalg->opfd);
}
g_free(afalg);


@@ -1014,7 +1014,6 @@ static const char rv_vreg_name_sym[32][4] = {
#define rv_fmt_rd_offset "O\t0,o"
#define rv_fmt_rd_rs1_rs2 "O\t0,1,2"
#define rv_fmt_frd_rs1 "O\t3,1"
#define rv_fmt_frd_frs1 "O\t3,4"
#define rv_fmt_rd_frs1 "O\t0,4"
#define rv_fmt_rd_frs1_frs2 "O\t0,4,5"
#define rv_fmt_frd_frs1_frs2 "O\t3,4,5"
@@ -1581,15 +1580,15 @@ const rv_opcode_data opcode_data[] = {
{ "snez", rv_codec_r, rv_fmt_rd_rs2, NULL, 0, 0, 0 },
{ "sltz", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "sgtz", rv_codec_r, rv_fmt_rd_rs2, NULL, 0, 0, 0 },
{ "fmv.s", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fabs.s", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fneg.s", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fmv.d", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fabs.d", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fneg.d", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fmv.q", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fabs.q", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fneg.q", rv_codec_r, rv_fmt_frd_frs1, NULL, 0, 0, 0 },
{ "fmv.s", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fabs.s", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fneg.s", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fmv.d", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fabs.d", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fneg.d", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fmv.q", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fabs.q", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "fneg.q", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "beqz", rv_codec_sb, rv_fmt_rs1_offset, NULL, 0, 0, 0 },
{ "bnez", rv_codec_sb, rv_fmt_rs1_offset, NULL, 0, 0, 0 },
{ "blez", rv_codec_sb, rv_fmt_rs2_offset, NULL, 0, 0, 0 },
@@ -1646,9 +1645,9 @@ const rv_opcode_data opcode_data[] = {
{ "max", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "maxu", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "clzw", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "ctzw", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "clzw", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "cpopw", rv_codec_r, rv_fmt_rd_rs1, NULL, 0, 0, 0 },
{ "slli.uw", rv_codec_i_sh6, rv_fmt_rd_rs1_imm, NULL, 0, 0, 0 },
{ "slli.uw", rv_codec_i_sh5, rv_fmt_rd_rs1_imm, NULL, 0, 0, 0 },
{ "add.uw", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "rolw", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
{ "rorw", rv_codec_r, rv_fmt_rd_rs1_rs2, NULL, 0, 0, 0 },
@@ -2618,10 +2617,10 @@ static void decode_inst_opcode(rv_decode *dec, rv_isa isa)
switch (((inst >> 12) & 0b111)) {
case 0: op = rv_op_addiw; break;
case 1:
switch (((inst >> 26) & 0b111111)) {
switch (((inst >> 25) & 0b1111111)) {
case 0: op = rv_op_slliw; break;
case 2: op = rv_op_slli_uw; break;
case 24:
case 4: op = rv_op_slli_uw; break;
case 48:
switch ((inst >> 20) & 0b11111) {
case 0b00000: op = rv_op_clzw; break;
case 0b00001: op = rv_op_ctzw; break;


@@ -67,8 +67,7 @@ Non-supported architectures may be removed in the future following the
Linux OS, macOS, FreeBSD, NetBSD, OpenBSD
-----------------------------------------
The project aims to support the most recent major version at all times for
up to five years after its initial release. Support
The project aims to support the most recent major version at all times. Support
for the previous major version will be dropped 2 years after the new major
version is released or when the vendor itself drops support, whichever comes
first. In this context, third-party efforts to extend the lifetime of a distro


@@ -196,17 +196,6 @@ CI coverage support may bitrot away before the deprecation process
completes. The little endian variants of MIPS (both 32 and 64 bit) are
still a supported host architecture.
System emulation on 32-bit x86 hosts (since 8.0)
''''''''''''''''''''''''''''''''''''''''''''''''
Support for 32-bit x86 host deployments is increasingly uncommon in mainstream
OS distributions given the widespread availability of 64-bit x86 hardware.
The QEMU project no longer considers 32-bit x86 support for system emulation to
be an effective use of its limited resources, and thus intends to discontinue
it. Since all recent x86 hardware from the past >10 years is capable of the
64-bit x86 extensions, a corresponding 64-bit OS should be used instead.
QEMU API (QAPI) events
----------------------


@@ -56,10 +56,8 @@
[machine]
type = "virt"
gic-version = "host"
[accel]
accel = "kvm"
gic-version = "host"
[memory]
size = "1024"


@@ -62,10 +62,8 @@
[machine]
type = "virt"
gic-version = "host"
[accel]
accel = "kvm"
gic-version = "host"
[memory]
size = "1024"


@@ -61,8 +61,6 @@
[machine]
type = "q35"
[accel]
accel = "kvm"
[memory]


@@ -55,8 +55,6 @@
[machine]
type = "q35"
[accel]
accel = "kvm"
[memory]


@@ -60,8 +60,6 @@
[machine]
type = "q35"
[accel]
accel = "kvm"
[memory]


@@ -27,8 +27,7 @@ provides macros that fall in three camps:
- weak atomic access and manual memory barriers: ``qatomic_read()``,
``qatomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``,
``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``,
``smp_mb__before_rmw()``, ``smp_mb__after_rmw()``;
``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``;
- sequentially consistent atomic access: everything else.
@@ -469,19 +468,13 @@ and memory barriers, and the equivalents in QEMU:
In QEMU, the second kind is named ``atomic_OP_fetch``.
- different atomic read-modify-write operations in Linux imply
a different set of memory barriers. In QEMU, all of them enforce
sequential consistency: there is a single order in which the
program sees them happen.
a different set of memory barriers; in QEMU, all of them enforce
sequential consistency.
- however, according to the C11 memory model that QEMU uses, this order
does not propagate to other memory accesses on either side of the
read-modify-write operation. As far as those are concerned, the
operation consists of just a load-acquire followed by a store-release.
Stores that precede the RMW operation, and loads that follow it, can
still be reordered and will happen *in the middle* of the read-modify-write
operation!
Therefore, the following example is correct in Linux but not in QEMU:
- in QEMU, ``qatomic_read()`` and ``qatomic_set()`` do not participate in
the total ordering enforced by sequentially-consistent operations.
This is because QEMU uses the C11 memory model. The following example
is correct in Linux but not in QEMU:
+----------------------------------+--------------------------------+
| Linux (correct) | QEMU (incorrect) |
@@ -495,24 +488,9 @@ and memory barriers, and the equivalents in QEMU:
because the read of ``y`` can be moved (by either the processor or the
compiler) before the write of ``x``.
Fixing this requires a full memory barrier between the write of ``x`` and
the read of ``y``. QEMU provides ``smp_mb__before_rmw()`` and
``smp_mb__after_rmw()``; they act both as an optimization,
avoiding the memory barrier on processors where it is unnecessary,
and as a clarification of this corner case of the C11 memory model:
+--------------------------------+
| QEMU (correct) |
+================================+
| :: |
| |
| a = qatomic_fetch_add(&x, 2);|
| smp_mb__after_rmw(); |
| b = qatomic_read(&y); |
+--------------------------------+
In the common case where only one thread writes ``x``, it is also possible
to write it like this:
Fixing this requires an ``smp_mb()`` memory barrier between the write
of ``x`` and the read of ``y``. In the common case where only one thread
writes ``x``, it is also possible to write it like this:
+--------------------------------+
| QEMU (correct) |
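
As a self-contained illustration of the pattern in the tables above (only a
sketch, assuming the usual QEMU build environment and the macros from
``qemu/atomic.h``), two threads running the store-buffering test cannot both
miss the other's update once the barrier is present::

    #include "qemu/osdep.h"
    #include "qemu/atomic.h"

    static int x, y;

    static void *thread_a(void *arg)
    {
        int a;

        qatomic_fetch_add(&x, 2);
        smp_mb__after_rmw();        /* order the RMW before the read of y */
        a = qatomic_read(&y);
        return (void *)(intptr_t)a;
    }

    static void *thread_b(void *arg)
    {
        int b;

        qatomic_fetch_add(&y, 2);
        smp_mb__after_rmw();        /* order the RMW before the read of x */
        b = qatomic_read(&x);
        return (void *)(intptr_t)b;
    }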

View File

@@ -59,37 +59,22 @@ System memory dirty pages tracking
----------------------------------
A ``log_global_start`` and ``log_global_stop`` memory listener callback informs
the VFIO dirty tracking module to start and stop dirty page tracking. A
``log_sync`` memory listener callback queries the dirty page bitmap from the
dirty tracking module and marks system memory pages which were DMA-ed by the
VFIO device as dirty. The dirty page bitmap is queried per container.
Currently there are two ways dirty page tracking can be done:
(1) Device dirty tracking:
In this method the device is responsible for logging and reporting its DMAs. This
method can be used only if the device is capable of tracking its DMAs.
Discovering device capability, starting and stopping dirty tracking, and
syncing the dirty bitmaps from the device are done using the DMA logging uAPI.
More info about the uAPI can be found in the comments of the
``vfio_device_feature_dma_logging_control`` and
``vfio_device_feature_dma_logging_report`` structures in the header file
linux-headers/linux/vfio.h.
(2) VFIO IOMMU module:
In this method dirty tracking is done by the IOMMU. However, there is currently no
IOMMU support for dirty page tracking. For this reason, all pages are
perpetually marked dirty, unless the device driver pins pages through external
APIs in which case only those pinned pages are perpetually marked dirty.
If the above two methods are not supported, all pages are perpetually marked
dirty by QEMU.
the VFIO IOMMU module to start and stop dirty page tracking. A ``log_sync``
memory listener callback marks those system memory pages as dirty which are
used for DMA by the VFIO device. The dirty pages bitmap is queried per
container. All pages pinned by the vendor driver through external APIs have to
be marked as dirty during migration. When there are CPU writes, CPU dirty page
tracking can identify dirtied pages, but any page pinned by the vendor driver
can also be written by the device. There is currently no device or IOMMU
support for dirty page tracking in hardware.
By default, dirty pages are tracked during pre-copy as well as stop-and-copy
phase. So, a page marked as dirty will be copied to the destination in both
phases. Copying dirty pages in pre-copy phase helps QEMU to predict if it can
achieve its downtime tolerances. If QEMU during pre-copy phase keeps finding
dirty pages continuously, then it understands that even in stop-and-copy phase,
it is likely to find dirty pages and can predict the downtime accordingly.
phase. So, a page pinned by the vendor driver will be copied to the destination
in both phases. Copying dirty pages in pre-copy phase helps QEMU to predict if
it can achieve its downtime tolerances. If QEMU during pre-copy phase keeps
finding dirty pages continuously, then it understands that even in stop-and-copy
phase, it is likely to find dirty pages and can predict the downtime
accordingly.
QEMU also provides a per device opt-out option ``pre-copy-dirty-page-tracking``
which disables querying the dirty bitmap during pre-copy phase. If it is set to
@@ -104,8 +89,7 @@ phase of migration. In that case, the unmap ioctl returns any dirty pages in
that range and QEMU reports corresponding guest physical pages dirty. During
stop-and-copy phase, an IOMMU notifier is used to get a callback for mapped
pages and then dirty pages bitmap is fetched from VFIO IOMMU modules for those
mapped ranges. If device dirty tracking is enabled with vIOMMU, live migration
will be blocked.
mapped ranges.
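Putting the callbacks described above together, the dirty tracking hooks are
plain ``MemoryListener`` callbacks. The following is only an illustrative
sketch (the callback names, ``memory_listener_register()`` and
``address_space_memory`` exist in QEMU, but the bodies are placeholders rather
than the real VFIO implementation, and signatures may differ between QEMU
versions)::

    static void dirty_log_global_start(MemoryListener *listener)
    {
        /* tell the device or the VFIO IOMMU module to start logging DMAs */
    }

    static void dirty_log_global_stop(MemoryListener *listener)
    {
        /* stop dirty page logging */
    }

    static void dirty_log_sync(MemoryListener *listener,
                               MemoryRegionSection *section)
    {
        /*
         * Query the dirty bitmap covering this section from the tracking
         * module and mark the corresponding guest pages dirty in the
         * migration bitmap.
         */
    }

    static MemoryListener dirty_tracking_listener = {
        .name             = "dirty-tracking-example",
        .log_global_start = dirty_log_global_start,
        .log_global_stop  = dirty_log_global_stop,
        .log_sync         = dirty_log_sync,
    };

    static void dirty_tracking_register(void)
    {
        /* register against the address space the device DMAs into */
        memory_listener_register(&dirty_tracking_listener,
                                 &address_space_memory);
    }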
Flow of state changes during Live migration
===========================================

View File

@@ -7,7 +7,7 @@ if sphinx_build.found()
SPHINX_ARGS = ['env', 'CONFDIR=' + qemu_confdir, sphinx_build, '-q']
# If we're making warnings fatal, apply this to Sphinx runs as well
if get_option('werror')
SPHINX_ARGS += [ '-W', '-Dkerneldoc_werror=1' ]
SPHINX_ARGS += [ '-W' ]
endif
# This is a bit awkward but works: create a trivial document and

View File

@@ -74,10 +74,6 @@ class KernelDocDirective(Directive):
# Sphinx versions
cmd += ['-sphinx-version', sphinx.__version__]
# Pass through the warnings-as-errors flag
if env.config.kerneldoc_werror:
cmd += ['-Werror']
filename = env.config.kerneldoc_srctree + '/' + self.arguments[0]
export_file_patterns = []
@@ -171,7 +167,6 @@ def setup(app):
app.add_config_value('kerneldoc_bin', None, 'env')
app.add_config_value('kerneldoc_srctree', None, 'env')
app.add_config_value('kerneldoc_verbosity', 1, 'env')
app.add_config_value('kerneldoc_werror', 0, 'env')
app.add_directive('kernel-doc', KernelDocDirective)

View File

@@ -177,32 +177,39 @@ are named with the prefix "kvm-". KVM VCPU features may be probed,
enabled, and disabled in the same way as other CPU features. Below is
the list of KVM VCPU features and their descriptions.
``kvm-no-adjvtime``
By default kvm-no-adjvtime is disabled. This means that by default
the virtual time adjustment is enabled (vtime is not *not* adjusted).
kvm-no-adjvtime By default kvm-no-adjvtime is disabled. This
means that by default the virtual time
adjustment is enabled (vtime is not *not*
adjusted).
When virtual time adjustment is enabled each time the VM transitions
back to running state the VCPU's virtual counter is updated to
ensure stopped time is not counted. This avoids time jumps
surprising guest OSes and applications, as long as they use the
virtual counter for timekeeping. However it has the side effect of
the virtual and physical counters diverging. All timekeeping based
on the virtual counter will appear to lag behind any timekeeping
that does not subtract VM stopped time. The guest may resynchronize
its virtual counter with other time sources as needed.
When virtual time adjustment is enabled each
time the VM transitions back to running state
the VCPU's virtual counter is updated to ensure
stopped time is not counted. This avoids time
jumps surprising guest OSes and applications,
as long as they use the virtual counter for
timekeeping. However it has the side effect of
the virtual and physical counters diverging.
All timekeeping based on the virtual counter
will appear to lag behind any timekeeping that
does not subtract VM stopped time. The guest
may resynchronize its virtual counter with
other time sources as needed.
Enable kvm-no-adjvtime to disable virtual time adjustment, also
restoring the legacy (pre-5.0) behavior.
Enable kvm-no-adjvtime to disable virtual time
adjustment, also restoring the legacy (pre-5.0)
behavior.
``kvm-steal-time``
Since v5.2, kvm-steal-time is enabled by default when KVM is
enabled, the feature is supported, and the guest is 64-bit.
kvm-steal-time Since v5.2, kvm-steal-time is enabled by
default when KVM is enabled, the feature is
supported, and the guest is 64-bit.
When kvm-steal-time is enabled a 64-bit guest can account for time
its CPUs were not running due to the host not scheduling the
corresponding VCPU threads. The accounting statistics may influence
the guest scheduler behavior and/or be exposed to the guest
userspace.
When kvm-steal-time is enabled a 64-bit guest
can account for time its CPUs were not running
due to the host not scheduling the corresponding
VCPU threads. The accounting statistics may
influence the guest scheduler behavior and/or be
exposed to the guest userspace.
TCG VCPU Features
=================
@@ -210,15 +217,16 @@ TCG VCPU Features
TCG VCPU features are CPU features that are specific to TCG.
Below is the list of TCG VCPU features and their descriptions.
``pauth-impdef``
When ``FEAT_Pauth`` is enabled, either the *impdef* (Implementation
Defined) algorithm is enabled or the *architected* QARMA algorithm
is enabled. By default the impdef algorithm is disabled, and QARMA
is enabled.
pauth-impdef When ``FEAT_Pauth`` is enabled, either the
*impdef* (Implementation Defined) algorithm
is enabled or the *architected* QARMA algorithm
is enabled. By default the impdef algorithm
is disabled, and QARMA is enabled.
The architected QARMA algorithm has good cryptographic properties,
but can be quite slow to emulate. The impdef algorithm used by QEMU
is non-cryptographic but significantly faster.
The architected QARMA algorithm has good
cryptographic properties, but can be quite slow
to emulate. The impdef algorithm used by QEMU
is non-cryptographic but significantly faster.
SVE CPU Properties
==================

View File

@@ -93,4 +93,3 @@ Emulated Devices
devices/virtio-pmem.rst
devices/vhost-user-rng.rst
devices/canokey.rst
devices/igb.rst

View File

@@ -1,71 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0-or-later
.. _igb:
igb
---
igb is a family of Intel's gigabit ethernet controllers. In QEMU, the 82576
controller is emulated in particular. Its datasheet is available at [1]_.
This implementation is expected to be useful for testing SR-IOV networking
without requiring physical hardware.
Limitations
===========
This igb implementation was tested with Linux Test Project [2]_ and Windows HLK
[3]_ during the initial development. The command used when testing with LTP is:
.. code-block:: shell
network.sh -6mta
Be aware that this implementation lacks many functionalities available with the
actual hardware, and you may experience various failures if you try to use it
with an operating system other than Linux and Windows or if you try
functionalities not covered by the tests.
Using igb
=========
Using igb should be no different from using any other network device. See
:ref:`pcsys_005fnetwork` in general.
However, you may also need to perform additional steps to activate the SR-IOV
feature on your guest. For Linux, refer to [4]_.
Developing igb
==============
igb is the successor of e1000e, and e1000e is the successor of e1000 in turn.
As these devices are very similar, if you make a change for igb and the same
change can be applied to e1000e and e1000, please do so.
Please do not forget to run tests before submitting a change. As the tests
included in QEMU are very minimal, run some application which is likely to be affected by
the change to confirm it works in an integrated system.
Testing igb
===========
A qtest of the basic functionality is available. Run the following in the build
directory:
.. code-block:: shell
meson test qtest-x86_64/qos-test
ethtool can test register accesses, interrupts, etc. It is automated as an
Avocado test and can be run with the following command:
.. code:: shell
make check-avocado AVOCADO_TESTS=tests/avocado/igb.py
References
==========
.. [1] https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/82576eb-gigabit-ethernet-controller-datasheet.pdf
.. [2] https://github.com/linux-test-project/ltp
.. [3] https://learn.microsoft.com/en-us/windows-hardware/test/hlk/
.. [4] https://docs.kernel.org/PCI/pci-iov-howto.html

View File

@@ -9,8 +9,6 @@ KVM has support for hosting Xen guests, intercepting Xen hypercalls and event
channel (Xen PV interrupt) delivery. This allows guests which expect to be
run under Xen to be hosted in QEMU under Linux/KVM instead.
Using the split irqchip is mandatory for Xen support.
Setup
-----
@@ -19,14 +17,14 @@ accelerator, for example for Xen 4.10:
.. parsed-literal::
|qemu_system| --accel kvm,xen-version=0x4000a,kernel-irqchip=split
|qemu_system| --accel kvm,xen-version=0x4000a
Additionally, virtual APIC support can be advertised to the guest through the
``xen-vapic`` CPU flag:
.. parsed-literal::
|qemu_system| --accel kvm,xen-version=0x4000a,kernel-irqchip=split --cpu host,+xen_vapic
|qemu_system| --accel kvm,xen-version=0x4000a --cpu host,+xen_vapic
When Xen support is enabled, QEMU changes hypervisor identification (CPUID
0x40000000..0x4000000A) to Xen. The KVM identification and features are not
@@ -35,25 +33,11 @@ moves to leaves 0x40000100..0x4000010A.
The Xen platform device is enabled automatically for a Xen guest. This allows
a guest to unplug all emulated devices, in order to use Xen PV block and network
drivers instead. Under Xen, the boot disk is typically available both via IDE
emulation, and as a PV block device. Guest bootloaders typically use IDE to load
the guest kernel, which then unplugs the IDE and continues with the Xen PV block
device.
This configuration can be achieved as follows
.. parsed-literal::
|qemu_system| -M pc --accel kvm,xen-version=0x4000a,kernel-irqchip=split \\
-drive file=${GUEST_IMAGE},if=none,id=disk,file.locking=off -device xen-disk,drive=disk,vdev=xvda \\
-drive file=${GUEST_IMAGE},index=2,media=disk,file.locking=off,if=ide
It is necessary to use the pc machine type, as the q35 machine uses AHCI instead
of legacy IDE, and AHCI disks are not unplugged through the Xen PV unplug
mechanism.
VirtIO devices can also be used; Linux guests may need to be dissuaded from
unplugging them by adding 'xen_emul_unplug=never' on their command line.
drivers instead. Note that until the Xen PV device back ends are enabled to work
with Xen mode in QEMU, that is unlikely to cause significant joy. Linux guests
can be dissuaded from this by adding 'xen_emul_unplug=never' on their command
line, and it can also be noted that AHCI disk controllers are exempt from being
unplugged, as are passthrough VFIO PCI devices.
Properties
----------

View File

@@ -8,6 +8,8 @@ endian options, ``qemu-system-mips``, ``qemu-system-mipsel``
``qemu-system-mips64`` and ``qemu-system-mips64el``. Five different
machine types are emulated:
- A generic ISA PC-like machine \"mips\"
- The MIPS Malta prototype board \"malta\"
- An ACER Pica \"pica61\". This machine needs the 64-bit emulator.
@@ -17,6 +19,18 @@ machine types are emulated:
- A MIPS Magnum R4000 machine \"magnum\". This machine needs the
64-bit emulator.
The generic emulation is supported by Debian 'Etch' and is able to
install Debian into a virtual disk image. The following devices are
emulated:
- A range of MIPS CPUs, default is the 24Kf
- PC style serial port
- PC style IDE disk
- NE2000 network card
The Malta emulation supports the following devices:
- Core board with MIPS 24Kf CPU and Galileo system controller

View File

@@ -24,7 +24,6 @@
#include "qapi/qapi-commands-dump.h"
#include "qapi/qapi-events-dump.h"
#include "qapi/qmp/qerror.h"
#include "qemu/error-report.h"
#include "qemu/main-loop.h"
#include "hw/misc/vmcoreinfo.h"
#include "migration/blocker.h"

View File

@@ -11,7 +11,6 @@
#include "qemu/osdep.h"
#include "sysemu/dump.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
#include "qapi/qmp/qerror.h"
#include "exec/cpu-defs.h"
#include "hw/core/cpu.h"

File diff suppressed because it is too large

View File

@@ -1,15 +0,0 @@
<?xml version="1.0"?>
<!-- Copyright (C) 2018-2022 Free Software Foundation, Inc.
Copying and distribution of this file, with or without modification,
are permitted in any medium without royalty provided the copyright
notice and this notice are preserved. -->
<!DOCTYPE feature SYSTEM "gdb-target.dtd">
<feature name="org.gnu.gdb.aarch64.pauth">
<reg name="pauth_dmask" bitsize="64"/>
<reg name="pauth_cmask" bitsize="64"/>
<reg name="pauth_dmask_high" bitsize="64"/>
<reg name="pauth_cmask_high" bitsize="64"/>
</feature>

File diff suppressed because it is too large

View File

@@ -6,220 +6,14 @@
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#ifndef GDBSTUB_INTERNALS_H
#define GDBSTUB_INTERNALS_H
#ifndef _INTERNALS_H_
#define _INTERNALS_H_
#include "exec/cpu-common.h"
#define MAX_PACKET_LENGTH 4096
/*
* Shared structures and definitions
*/
enum {
GDB_SIGNAL_0 = 0,
GDB_SIGNAL_INT = 2,
GDB_SIGNAL_QUIT = 3,
GDB_SIGNAL_TRAP = 5,
GDB_SIGNAL_ABRT = 6,
GDB_SIGNAL_ALRM = 14,
GDB_SIGNAL_IO = 23,
GDB_SIGNAL_XCPU = 24,
GDB_SIGNAL_UNKNOWN = 143
};
typedef struct GDBProcess {
uint32_t pid;
bool attached;
char target_xml[1024];
} GDBProcess;
enum RSState {
RS_INACTIVE,
RS_IDLE,
RS_GETLINE,
RS_GETLINE_ESC,
RS_GETLINE_RLE,
RS_CHKSUM1,
RS_CHKSUM2,
};
typedef struct GDBState {
bool init; /* have we been initialised? */
CPUState *c_cpu; /* current CPU for step/continue ops */
CPUState *g_cpu; /* current CPU for other ops */
CPUState *query_cpu; /* for q{f|s}ThreadInfo */
enum RSState state; /* parsing state */
char line_buf[MAX_PACKET_LENGTH];
int line_buf_index;
int line_sum; /* running checksum */
int line_csum; /* checksum at the end of the packet */
GByteArray *last_packet;
int signal;
bool multiprocess;
GDBProcess *processes;
int process_num;
GString *str_buf;
GByteArray *mem_buf;
int sstep_flags;
int supported_sstep_flags;
} GDBState;
/* lives in main gdbstub.c */
extern GDBState gdbserver_state;
/*
* Inline utility function, convert from int to hex and back
*/
static inline int fromhex(int v)
{
if (v >= '0' && v <= '9') {
return v - '0';
} else if (v >= 'A' && v <= 'F') {
return v - 'A' + 10;
} else if (v >= 'a' && v <= 'f') {
return v - 'a' + 10;
} else {
return 0;
}
}
static inline int tohex(int v)
{
if (v < 10) {
return v + '0';
} else {
return v - 10 + 'a';
}
}
/*
* Connection helpers for both softmmu and user backends
*/
void gdb_put_strbuf(void);
int gdb_put_packet(const char *buf);
int gdb_put_packet_binary(const char *buf, int len, bool dump);
void gdb_hextomem(GByteArray *mem, const char *buf, int len);
void gdb_memtohex(GString *buf, const uint8_t *mem, int len);
void gdb_memtox(GString *buf, const char *mem, int len);
void gdb_read_byte(uint8_t ch);
/*
* Packet acknowledgement - we handle this slightly differently
* between user and softmmu mode, mainly to deal with the differences
* between the flexible chardev and the direct fd approaches.
*
* We currently don't support a negotiated QStartNoAckMode
*/
/**
* gdb_got_immediate_ack() - check ok to continue
*
* Returns true to continue, false to re-transmit for user only, the
* softmmu stub always returns true.
*/
bool gdb_got_immediate_ack(void);
/* utility helpers */
CPUState *gdb_first_attached_cpu(void);
void gdb_append_thread_id(CPUState *cpu, GString *buf);
int gdb_get_cpu_index(CPUState *cpu);
unsigned int gdb_get_max_cpus(void); /* both */
bool gdb_can_reverse(void); /* softmmu, stub for user */
void gdb_create_default_process(GDBState *s);
/* signal mapping, common for softmmu, specialised for user-mode */
int gdb_signal_to_target(int sig);
int gdb_target_signal_to_gdb(int sig);
int gdb_get_char(void); /* user only */
/**
* gdb_continue() - handle continue in mode specific way.
*/
void gdb_continue(void);
/**
* gdb_continue_partial() - handle partial continue in mode specific way.
*/
int gdb_continue_partial(char *newstates);
/*
* Helpers with separate softmmu and user implementations
*/
void gdb_put_buffer(const uint8_t *buf, int len);
/*
* Command handlers - either specialised or softmmu or user only
*/
void gdb_init_gdbserver_state(void);
typedef enum GDBThreadIdKind {
GDB_ONE_THREAD = 0,
GDB_ALL_THREADS, /* One process, all threads */
GDB_ALL_PROCESSES,
GDB_READ_THREAD_ERR
} GDBThreadIdKind;
typedef union GdbCmdVariant {
const char *data;
uint8_t opcode;
unsigned long val_ul;
unsigned long long val_ull;
struct {
GDBThreadIdKind kind;
uint32_t pid;
uint32_t tid;
} thread_id;
} GdbCmdVariant;
#define get_param(p, i) (&g_array_index(p, GdbCmdVariant, i))
void gdb_handle_query_rcmd(GArray *params, void *user_ctx); /* softmmu */
void gdb_handle_query_offsets(GArray *params, void *user_ctx); /* user */
void gdb_handle_query_xfer_auxv(GArray *params, void *user_ctx); /*user */
void gdb_handle_query_attached(GArray *params, void *user_ctx); /* both */
/* softmmu only */
void gdb_handle_query_qemu_phy_mem_mode(GArray *params, void *user_ctx);
void gdb_handle_set_qemu_phy_mem_mode(GArray *params, void *user_ctx);
/* syscall handling */
void gdb_handle_file_io(GArray *params, void *user_ctx);
bool gdb_handled_syscall(void);
void gdb_disable_syscalls(void);
void gdb_syscall_reset(void);
/* user/softmmu specific syscall handling */
void gdb_syscall_handling(const char *syscall_packet);
/*
* Break/Watch point support - there is an implementation for softmmu
* and user mode.
*/
bool gdb_supports_guest_debug(void);
int gdb_breakpoint_insert(CPUState *cs, int type, vaddr addr, vaddr len);
int gdb_breakpoint_remove(CPUState *cs, int type, vaddr addr, vaddr len);
void gdb_breakpoint_remove_all(CPUState *cs);
/**
* gdb_target_memory_rw_debug() - handle debug access to memory
* @cs: CPUState
* @addr: nominal address, could be an entire physical address
* @buf: data
* @len: length of access
* @is_write: is it a write operation
*
* This function is specialised depending on the mode we are running
* in. For softmmu guests we can switch the interpretation of the
* address to a physical address.
*/
int gdb_target_memory_rw_debug(CPUState *cs, hwaddr addr,
uint8_t *buf, int len, bool is_write);
#endif /* GDBSTUB_INTERNALS_H */
#endif /* _INTERNALS_H_ */

View File

@@ -4,34 +4,6 @@
# types such as hwaddr.
#
# We need to build the core gdb code via a library to be able to tweak
# cflags so:
gdb_user_ss = ss.source_set()
gdb_softmmu_ss = ss.source_set()
# We build two versions of gdbstub, one for each mode
gdb_user_ss.add(files('gdbstub.c', 'user.c'))
gdb_softmmu_ss.add(files('gdbstub.c', 'softmmu.c'))
gdb_user_ss = gdb_user_ss.apply(config_host, strict: false)
gdb_softmmu_ss = gdb_softmmu_ss.apply(config_host, strict: false)
libgdb_user = static_library('gdb_user',
gdb_user_ss.sources() + genh,
name_suffix: 'fa',
c_args: '-DCONFIG_USER_ONLY')
libgdb_softmmu = static_library('gdb_softmmu',
gdb_softmmu_ss.sources() + genh,
name_suffix: 'fa')
gdb_user = declare_dependency(link_whole: libgdb_user)
user_ss.add(gdb_user)
gdb_softmmu = declare_dependency(link_whole: libgdb_softmmu)
softmmu_ss.add(gdb_softmmu)
common_ss.add(files('syscalls.c'))
# The user-target is specialised by the guest
specific_ss.add(when: 'CONFIG_USER_ONLY', if_true: files('user-target.c'))
specific_ss.add(files('gdbstub.c'))
softmmu_ss.add(files('softmmu.c'))
user_ss.add(files('user.c'))

View File

@@ -4,617 +4,16 @@
* Debug integration depends on support from the individual
* accelerators so most of this involves calling the ops helpers.
*
* Copyright (c) 2003-2005 Fabrice Bellard
* Copyright (c) 2022 Linaro Ltd
*
* SPDX-License-Identifier: LGPL-2.0+
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
#include "qapi/error.h"
#include "qemu/error-report.h"
#include "qemu/cutils.h"
#include "exec/gdbstub.h"
#include "gdbstub/syscalls.h"
#include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "sysemu/cpus.h"
#include "sysemu/runstate.h"
#include "sysemu/replay.h"
#include "hw/core/cpu.h"
#include "hw/cpu/cluster.h"
#include "hw/boards.h"
#include "chardev/char.h"
#include "chardev/char-fe.h"
#include "monitor/monitor.h"
#include "trace.h"
#include "internals.h"
/* System emulation specific state */
typedef struct {
CharBackend chr;
Chardev *mon_chr;
} GDBSystemState;
GDBSystemState gdbserver_system_state;
static void reset_gdbserver_state(void)
{
g_free(gdbserver_state.processes);
gdbserver_state.processes = NULL;
gdbserver_state.process_num = 0;
}
/*
* Return the GDB index for a given vCPU state.
*
* In system mode GDB numbers CPUs from 1 as 0 is reserved as an "any
* cpu" index.
*/
int gdb_get_cpu_index(CPUState *cpu)
{
return cpu->cpu_index + 1;
}
/*
* We check the status of the last message in the chardev receive code
*/
bool gdb_got_immediate_ack(void)
{
return true;
}
/*
* GDB Connection management. For system emulation we do all of this
* via our existing Chardev infrastructure which allows us to support
* network and unix sockets.
*/
void gdb_put_buffer(const uint8_t *buf, int len)
{
/*
* XXX this blocks entire thread. Rewrite to use
* qemu_chr_fe_write and background I/O callbacks
*/
qemu_chr_fe_write_all(&gdbserver_system_state.chr, buf, len);
}
static void gdb_chr_event(void *opaque, QEMUChrEvent event)
{
int i;
GDBState *s = (GDBState *) opaque;
switch (event) {
case CHR_EVENT_OPENED:
/* Start with first process attached, others detached */
for (i = 0; i < s->process_num; i++) {
s->processes[i].attached = !i;
}
s->c_cpu = gdb_first_attached_cpu();
s->g_cpu = s->c_cpu;
vm_stop(RUN_STATE_PAUSED);
replay_gdb_attached();
gdb_has_xml = false;
break;
default:
break;
}
}
/*
* In softmmu mode we stop the VM and wait to send the syscall packet
* until notification that the CPU has stopped. This must be done
* because if the packet is sent now the reply from the syscall
* request could be received while the CPU is still in the running
* state, which can cause packets to be dropped and state transition
* 'T' packets to be sent while the syscall is still being processed.
*/
void gdb_syscall_handling(const char *syscall_packet)
{
vm_stop(RUN_STATE_DEBUG);
qemu_cpu_kick(gdbserver_state.c_cpu);
}
static void gdb_vm_state_change(void *opaque, bool running, RunState state)
{
CPUState *cpu = gdbserver_state.c_cpu;
g_autoptr(GString) buf = g_string_new(NULL);
g_autoptr(GString) tid = g_string_new(NULL);
const char *type;
int ret;
if (running || gdbserver_state.state == RS_INACTIVE) {
return;
}
/* Is there a GDB syscall waiting to be sent? */
if (gdb_handled_syscall()) {
return;
}
if (cpu == NULL) {
/* No process attached */
return;
}
gdb_append_thread_id(cpu, tid);
switch (state) {
case RUN_STATE_DEBUG:
if (cpu->watchpoint_hit) {
switch (cpu->watchpoint_hit->flags & BP_MEM_ACCESS) {
case BP_MEM_READ:
type = "r";
break;
case BP_MEM_ACCESS:
type = "a";
break;
default:
type = "";
break;
}
trace_gdbstub_hit_watchpoint(type,
gdb_get_cpu_index(cpu),
cpu->watchpoint_hit->vaddr);
g_string_printf(buf, "T%02xthread:%s;%swatch:%" VADDR_PRIx ";",
GDB_SIGNAL_TRAP, tid->str, type,
cpu->watchpoint_hit->vaddr);
cpu->watchpoint_hit = NULL;
goto send_packet;
} else {
trace_gdbstub_hit_break();
}
tb_flush(cpu);
ret = GDB_SIGNAL_TRAP;
break;
case RUN_STATE_PAUSED:
trace_gdbstub_hit_paused();
ret = GDB_SIGNAL_INT;
break;
case RUN_STATE_SHUTDOWN:
trace_gdbstub_hit_shutdown();
ret = GDB_SIGNAL_QUIT;
break;
case RUN_STATE_IO_ERROR:
trace_gdbstub_hit_io_error();
ret = GDB_SIGNAL_IO;
break;
case RUN_STATE_WATCHDOG:
trace_gdbstub_hit_watchdog();
ret = GDB_SIGNAL_ALRM;
break;
case RUN_STATE_INTERNAL_ERROR:
trace_gdbstub_hit_internal_error();
ret = GDB_SIGNAL_ABRT;
break;
case RUN_STATE_SAVE_VM:
case RUN_STATE_RESTORE_VM:
return;
case RUN_STATE_FINISH_MIGRATE:
ret = GDB_SIGNAL_XCPU;
break;
default:
trace_gdbstub_hit_unknown(state);
ret = GDB_SIGNAL_UNKNOWN;
break;
}
gdb_set_stop_cpu(cpu);
g_string_printf(buf, "T%02xthread:%s;", ret, tid->str);
send_packet:
gdb_put_packet(buf->str);
/* disable single step if it was enabled */
cpu_single_step(cpu, 0);
}
#ifndef _WIN32
static void gdb_sigterm_handler(int signal)
{
if (runstate_is_running()) {
vm_stop(RUN_STATE_PAUSED);
}
}
#endif
static int gdb_monitor_write(Chardev *chr, const uint8_t *buf, int len)
{
g_autoptr(GString) hex_buf = g_string_new("O");
gdb_memtohex(hex_buf, buf, len);
gdb_put_packet(hex_buf->str);
return len;
}
static void gdb_monitor_open(Chardev *chr, ChardevBackend *backend,
bool *be_opened, Error **errp)
{
*be_opened = false;
}
static void char_gdb_class_init(ObjectClass *oc, void *data)
{
ChardevClass *cc = CHARDEV_CLASS(oc);
cc->internal = true;
cc->open = gdb_monitor_open;
cc->chr_write = gdb_monitor_write;
}
#define TYPE_CHARDEV_GDB "chardev-gdb"
static const TypeInfo char_gdb_type_info = {
.name = TYPE_CHARDEV_GDB,
.parent = TYPE_CHARDEV,
.class_init = char_gdb_class_init,
};
static int gdb_chr_can_receive(void *opaque)
{
/*
* We can handle an arbitrarily large amount of data.
* Pick the maximum packet size, which is as good as anything.
*/
return MAX_PACKET_LENGTH;
}
static void gdb_chr_receive(void *opaque, const uint8_t *buf, int size)
{
int i;
for (i = 0; i < size; i++) {
gdb_read_byte(buf[i]);
}
}
static int find_cpu_clusters(Object *child, void *opaque)
{
if (object_dynamic_cast(child, TYPE_CPU_CLUSTER)) {
GDBState *s = (GDBState *) opaque;
CPUClusterState *cluster = CPU_CLUSTER(child);
GDBProcess *process;
s->processes = g_renew(GDBProcess, s->processes, ++s->process_num);
process = &s->processes[s->process_num - 1];
/*
* GDB process IDs -1 and 0 are reserved. To avoid subtle errors at
* runtime, we enforce here that the machine does not use a cluster ID
* that would lead to PID 0.
*/
assert(cluster->cluster_id != UINT32_MAX);
process->pid = cluster->cluster_id + 1;
process->attached = false;
process->target_xml[0] = '\0';
return 0;
}
return object_child_foreach(child, find_cpu_clusters, opaque);
}
static int pid_order(const void *a, const void *b)
{
GDBProcess *pa = (GDBProcess *) a;
GDBProcess *pb = (GDBProcess *) b;
if (pa->pid < pb->pid) {
return -1;
} else if (pa->pid > pb->pid) {
return 1;
} else {
return 0;
}
}
static void create_processes(GDBState *s)
{
object_child_foreach(object_get_root(), find_cpu_clusters, s);
if (gdbserver_state.processes) {
/* Sort by PID */
qsort(gdbserver_state.processes,
gdbserver_state.process_num,
sizeof(gdbserver_state.processes[0]),
pid_order);
}
gdb_create_default_process(s);
}
int gdbserver_start(const char *device)
{
trace_gdbstub_op_start(device);
char gdbstub_device_name[128];
Chardev *chr = NULL;
Chardev *mon_chr;
if (!first_cpu) {
error_report("gdbstub: meaningless to attach gdb to a "
"machine without any CPU.");
return -1;
}
if (!gdb_supports_guest_debug()) {
error_report("gdbstub: current accelerator doesn't "
"support guest debugging");
return -1;
}
if (!device) {
return -1;
}
if (strcmp(device, "none") != 0) {
if (strstart(device, "tcp:", NULL)) {
/* enforce required TCP attributes */
snprintf(gdbstub_device_name, sizeof(gdbstub_device_name),
"%s,wait=off,nodelay=on,server=on", device);
device = gdbstub_device_name;
}
#ifndef _WIN32
else if (strcmp(device, "stdio") == 0) {
struct sigaction act;
memset(&act, 0, sizeof(act));
act.sa_handler = gdb_sigterm_handler;
sigaction(SIGINT, &act, NULL);
}
#endif
/*
* FIXME: it's a bit weird to allow using a mux chardev here
* and implicitly setup a monitor. We may want to break this.
*/
chr = qemu_chr_new_noreplay("gdb", device, true, NULL);
if (!chr) {
return -1;
}
}
if (!gdbserver_state.init) {
gdb_init_gdbserver_state();
qemu_add_vm_change_state_handler(gdb_vm_state_change, NULL);
/* Initialize a monitor terminal for gdb */
mon_chr = qemu_chardev_new(NULL, TYPE_CHARDEV_GDB,
NULL, NULL, &error_abort);
monitor_init_hmp(mon_chr, false, &error_abort);
} else {
qemu_chr_fe_deinit(&gdbserver_system_state.chr, true);
mon_chr = gdbserver_system_state.mon_chr;
reset_gdbserver_state();
}
create_processes(&gdbserver_state);
if (chr) {
qemu_chr_fe_init(&gdbserver_system_state.chr, chr, &error_abort);
qemu_chr_fe_set_handlers(&gdbserver_system_state.chr,
gdb_chr_can_receive,
gdb_chr_receive, gdb_chr_event,
NULL, &gdbserver_state, NULL, true);
}
gdbserver_state.state = chr ? RS_IDLE : RS_INACTIVE;
gdbserver_system_state.mon_chr = mon_chr;
gdb_syscall_reset();
return 0;
}
static void register_types(void)
{
type_register_static(&char_gdb_type_info);
}
type_init(register_types);
/* Tell the remote gdb that the process has exited. */
void gdb_exit(int code)
{
char buf[4];
if (!gdbserver_state.init) {
return;
}
trace_gdbstub_op_exiting((uint8_t)code);
snprintf(buf, sizeof(buf), "W%02x", (uint8_t)code);
gdb_put_packet(buf);
qemu_chr_fe_deinit(&gdbserver_system_state.chr, true);
}
/*
* Memory access
*/
static int phy_memory_mode;
int gdb_target_memory_rw_debug(CPUState *cpu, hwaddr addr,
uint8_t *buf, int len, bool is_write)
{
CPUClass *cc;
if (phy_memory_mode) {
if (is_write) {
cpu_physical_memory_write(addr, buf, len);
} else {
cpu_physical_memory_read(addr, buf, len);
}
return 0;
}
cc = CPU_GET_CLASS(cpu);
if (cc->memory_rw_debug) {
return cc->memory_rw_debug(cpu, addr, buf, len, is_write);
}
return cpu_memory_rw_debug(cpu, addr, buf, len, is_write);
}
/*
* cpu helpers
*/
unsigned int gdb_get_max_cpus(void)
{
MachineState *ms = MACHINE(qdev_get_machine());
return ms->smp.max_cpus;
}
bool gdb_can_reverse(void)
{
return replay_mode == REPLAY_MODE_PLAY;
}
/*
* Softmmu specific command helpers
*/
void gdb_handle_query_qemu_phy_mem_mode(GArray *params,
void *user_ctx)
{
g_string_printf(gdbserver_state.str_buf, "%d", phy_memory_mode);
gdb_put_strbuf();
}
void gdb_handle_set_qemu_phy_mem_mode(GArray *params, void *user_ctx)
{
if (!params->len) {
gdb_put_packet("E22");
return;
}
if (!get_param(params, 0)->val_ul) {
phy_memory_mode = 0;
} else {
phy_memory_mode = 1;
}
gdb_put_packet("OK");
}
void gdb_handle_query_rcmd(GArray *params, void *user_ctx)
{
const guint8 zero = 0;
int len;
if (!params->len) {
gdb_put_packet("E22");
return;
}
len = strlen(get_param(params, 0)->data);
if (len % 2) {
gdb_put_packet("E01");
return;
}
g_assert(gdbserver_state.mem_buf->len == 0);
len = len / 2;
gdb_hextomem(gdbserver_state.mem_buf, get_param(params, 0)->data, len);
g_byte_array_append(gdbserver_state.mem_buf, &zero, 1);
qemu_chr_be_write(gdbserver_system_state.mon_chr,
gdbserver_state.mem_buf->data,
gdbserver_state.mem_buf->len);
gdb_put_packet("OK");
}
/*
* Execution state helpers
*/
void gdb_handle_query_attached(GArray *params, void *user_ctx)
{
gdb_put_packet("1");
}
void gdb_continue(void)
{
if (!runstate_needs_reset()) {
trace_gdbstub_op_continue();
vm_start();
}
}
/*
* Resume execution, per CPU actions.
*/
int gdb_continue_partial(char *newstates)
{
CPUState *cpu;
int res = 0;
int flag = 0;
if (!runstate_needs_reset()) {
bool step_requested = false;
CPU_FOREACH(cpu) {
if (newstates[cpu->cpu_index] == 's') {
step_requested = true;
break;
}
}
if (vm_prepare_start(step_requested)) {
return 0;
}
CPU_FOREACH(cpu) {
switch (newstates[cpu->cpu_index]) {
case 0:
case 1:
break; /* nothing to do here */
case 's':
trace_gdbstub_op_stepping(cpu->cpu_index);
cpu_single_step(cpu, gdbserver_state.sstep_flags);
cpu_resume(cpu);
flag = 1;
break;
case 'c':
trace_gdbstub_op_continue_cpu(cpu->cpu_index);
cpu_resume(cpu);
flag = 1;
break;
default:
res = -1;
break;
}
}
}
if (flag) {
qemu_clock_enable(QEMU_CLOCK_VIRTUAL, true);
}
return res;
}
/*
* Signal Handling - in system mode we only need SIGINT and SIGTRAP; other
* signals are not yet supported.
*/
enum {
TARGET_SIGINT = 2,
TARGET_SIGTRAP = 5
};
int gdb_signal_to_target(int sig)
{
switch (sig) {
case 2:
return TARGET_SIGINT;
case 5:
return TARGET_SIGTRAP;
default:
return -1;
}
}
/*
* Break/Watch point helpers
*/
bool gdb_supports_guest_debug(void)
{
const AccelOpsClass *ops = cpus_get_accel();

View File

@@ -1,205 +0,0 @@
/*
* GDB Syscall Handling
*
* GDB can execute syscalls on the guest's behalf, currently used by
* the various semihosting extensions.
*
* Copyright (c) 2003-2005 Fabrice Bellard
* Copyright (c) 2023 Linaro Ltd
*
* SPDX-License-Identifier: LGPL-2.0+
*/
#include "qemu/osdep.h"
#include "qemu/error-report.h"
#include "semihosting/semihost.h"
#include "sysemu/runstate.h"
#include "gdbstub/user.h"
#include "gdbstub/syscalls.h"
#include "trace.h"
#include "internals.h"
/* Syscall specific state */
typedef struct {
char syscall_buf[256];
gdb_syscall_complete_cb current_syscall_cb;
} GDBSyscallState;
static GDBSyscallState gdbserver_syscall_state;
/*
* Return true if there is a GDB currently connected to the stub
* and attached to a CPU
*/
static bool gdb_attached(void)
{
return gdbserver_state.init && gdbserver_state.c_cpu;
}
static enum {
GDB_SYS_UNKNOWN,
GDB_SYS_ENABLED,
GDB_SYS_DISABLED,
} gdb_syscall_mode;
/* Decide if either remote gdb syscalls or native file IO should be used. */
int use_gdb_syscalls(void)
{
SemihostingTarget target = semihosting_get_target();
if (target == SEMIHOSTING_TARGET_NATIVE) {
/* -semihosting-config target=native */
return false;
} else if (target == SEMIHOSTING_TARGET_GDB) {
/* -semihosting-config target=gdb */
return true;
}
/* -semihosting-config target=auto */
/* On the first call check if gdb is connected and remember. */
if (gdb_syscall_mode == GDB_SYS_UNKNOWN) {
gdb_syscall_mode = gdb_attached() ? GDB_SYS_ENABLED : GDB_SYS_DISABLED;
}
return gdb_syscall_mode == GDB_SYS_ENABLED;
}
/* called when the stub detaches */
void gdb_disable_syscalls(void)
{
gdb_syscall_mode = GDB_SYS_DISABLED;
}
void gdb_syscall_reset(void)
{
gdbserver_syscall_state.current_syscall_cb = NULL;
}
bool gdb_handled_syscall(void)
{
if (gdbserver_syscall_state.current_syscall_cb) {
gdb_put_packet(gdbserver_syscall_state.syscall_buf);
return true;
}
return false;
}
/*
* Send a gdb syscall request.
* This accepts limited printf-style format specifiers, specifically:
* %x - target_ulong argument printed in hex.
* %lx - 64-bit argument printed in hex.
* %s - string pointer (target_ulong) and length (int) pair.
*/
void gdb_do_syscall(gdb_syscall_complete_cb cb, const char *fmt, ...)
{
char *p, *p_end;
va_list va;
if (!gdb_attached()) {
return;
}
gdbserver_syscall_state.current_syscall_cb = cb;
va_start(va, fmt);
p = gdbserver_syscall_state.syscall_buf;
p_end = p + sizeof(gdbserver_syscall_state.syscall_buf);
*(p++) = 'F';
while (*fmt) {
if (*fmt == '%') {
uint64_t i64;
uint32_t i32;
fmt++;
switch (*fmt++) {
case 'x':
i32 = va_arg(va, uint32_t);
p += snprintf(p, p_end - p, "%" PRIx32, i32);
break;
case 'l':
if (*(fmt++) != 'x') {
goto bad_format;
}
i64 = va_arg(va, uint64_t);
p += snprintf(p, p_end - p, "%" PRIx64, i64);
break;
case 's':
i64 = va_arg(va, uint64_t);
i32 = va_arg(va, uint32_t);
p += snprintf(p, p_end - p, "%" PRIx64 "/%x" PRIx32, i64, i32);
break;
default:
bad_format:
error_report("gdbstub: Bad syscall format string '%s'",
fmt - 1);
break;
}
} else {
*(p++) = *(fmt++);
}
}
*p = 0;
va_end(va);
gdb_syscall_handling(gdbserver_syscall_state.syscall_buf);
}
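/*
 * Hypothetical example of the format described above (not part of the real
 * sources): ask the attached gdb to perform a write(2) on the guest's
 * behalf.  Everything outside the %x/%lx/%s specifiers is copied verbatim,
 * so the request packet becomes "Fwrite,<fd>,<addr>,<len>".
 */
static void example_write_done(CPUState *cs, uint64_t ret, int err)
{
    /* called once gdb replies with the matching "F" result packet */
}

static void example_remote_write(uint32_t fd, uint64_t buf, uint32_t len)
{
    gdb_do_syscall(example_write_done, "write,%x,%lx,%x", fd, buf, len);
}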
/*
* GDB Command Handlers
*/
void gdb_handle_file_io(GArray *params, void *user_ctx)
{
if (params->len >= 1 && gdbserver_syscall_state.current_syscall_cb) {
uint64_t ret;
int err;
ret = get_param(params, 0)->val_ull;
if (params->len >= 2) {
err = get_param(params, 1)->val_ull;
} else {
err = 0;
}
/* Convert GDB error numbers back to host error numbers. */
#define E(X) case GDB_E##X: err = E##X; break
switch (err) {
case 0:
break;
E(PERM);
E(NOENT);
E(INTR);
E(BADF);
E(ACCES);
E(FAULT);
E(BUSY);
E(EXIST);
E(NODEV);
E(NOTDIR);
E(ISDIR);
E(INVAL);
E(NFILE);
E(MFILE);
E(FBIG);
E(NOSPC);
E(SPIPE);
E(ROFS);
E(NAMETOOLONG);
default:
err = EINVAL;
break;
}
#undef E
gdbserver_syscall_state.current_syscall_cb(gdbserver_state.c_cpu,
ret, err);
gdbserver_syscall_state.current_syscall_cb = NULL;
}
if (params->len >= 3 && get_param(params, 2)->opcode == (uint8_t)'C') {
gdb_put_packet("T02");
return;
}
gdb_continue();
}

View File

@@ -7,6 +7,7 @@ gdbstub_op_continue(void) "Continuing all CPUs"
gdbstub_op_continue_cpu(int cpu_index) "Continuing CPU %d"
gdbstub_op_stepping(int cpu_index) "Stepping CPU %d"
gdbstub_op_extra_info(const char *info) "Thread extra info: %s"
gdbstub_hit_watchpoint(const char *type, int cpu_gdb_index, uint64_t vaddr) "Watchpoint hit, type=\"%s\" cpu=%d, vaddr=0x%" PRIx64 ""
gdbstub_hit_internal_error(void) "RUN_STATE_INTERNAL_ERROR"
gdbstub_hit_break(void) "RUN_STATE_DEBUG"
gdbstub_hit_paused(void) "RUN_STATE_PAUSED"
@@ -26,6 +27,3 @@ gdbstub_err_invalid_repeat(uint8_t ch) "got invalid RLE count: 0x%02x"
gdbstub_err_invalid_rle(void) "got invalid RLE sequence"
gdbstub_err_checksum_invalid(uint8_t ch) "got invalid command checksum digit: 0x%02x"
gdbstub_err_checksum_incorrect(uint8_t expected, uint8_t got) "got command packet with incorrect checksum, expected=0x%02x, received=0x%02x"
# softmmu.c
gdbstub_hit_watchpoint(const char *type, int cpu_gdb_index, uint64_t vaddr) "Watchpoint hit, type=\"%s\" cpu=%d, vaddr=0x%" PRIx64 ""

View File

@@ -1,283 +0,0 @@
/*
* Target specific user-mode handling
*
* Copyright (c) 2003-2005 Fabrice Bellard
* Copyright (c) 2022 Linaro Ltd
*
* SPDX-License-Identifier: LGPL-2.0+
*/
#include "qemu/osdep.h"
#include "exec/gdbstub.h"
#include "qemu.h"
#include "internals.h"
/*
* Map target signal numbers to GDB protocol signal numbers and vice
* versa. For user emulation's currently supported systems, we can
* assume most signals are defined.
*/
static int gdb_signal_table[] = {
0,
TARGET_SIGHUP,
TARGET_SIGINT,
TARGET_SIGQUIT,
TARGET_SIGILL,
TARGET_SIGTRAP,
TARGET_SIGABRT,
-1, /* SIGEMT */
TARGET_SIGFPE,
TARGET_SIGKILL,
TARGET_SIGBUS,
TARGET_SIGSEGV,
TARGET_SIGSYS,
TARGET_SIGPIPE,
TARGET_SIGALRM,
TARGET_SIGTERM,
TARGET_SIGURG,
TARGET_SIGSTOP,
TARGET_SIGTSTP,
TARGET_SIGCONT,
TARGET_SIGCHLD,
TARGET_SIGTTIN,
TARGET_SIGTTOU,
TARGET_SIGIO,
TARGET_SIGXCPU,
TARGET_SIGXFSZ,
TARGET_SIGVTALRM,
TARGET_SIGPROF,
TARGET_SIGWINCH,
-1, /* SIGLOST */
TARGET_SIGUSR1,
TARGET_SIGUSR2,
#ifdef TARGET_SIGPWR
TARGET_SIGPWR,
#else
-1,
#endif
-1, /* SIGPOLL */
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
-1,
#ifdef __SIGRTMIN
__SIGRTMIN + 1,
__SIGRTMIN + 2,
__SIGRTMIN + 3,
__SIGRTMIN + 4,
__SIGRTMIN + 5,
__SIGRTMIN + 6,
__SIGRTMIN + 7,
__SIGRTMIN + 8,
__SIGRTMIN + 9,
__SIGRTMIN + 10,
__SIGRTMIN + 11,
__SIGRTMIN + 12,
__SIGRTMIN + 13,
__SIGRTMIN + 14,
__SIGRTMIN + 15,
__SIGRTMIN + 16,
__SIGRTMIN + 17,
__SIGRTMIN + 18,
__SIGRTMIN + 19,
__SIGRTMIN + 20,
__SIGRTMIN + 21,
__SIGRTMIN + 22,
__SIGRTMIN + 23,
__SIGRTMIN + 24,
__SIGRTMIN + 25,
__SIGRTMIN + 26,
__SIGRTMIN + 27,
__SIGRTMIN + 28,
__SIGRTMIN + 29,
__SIGRTMIN + 30,
__SIGRTMIN + 31,
-1, /* SIGCANCEL */
__SIGRTMIN,
__SIGRTMIN + 32,
__SIGRTMIN + 33,
__SIGRTMIN + 34,
__SIGRTMIN + 35,
__SIGRTMIN + 36,
__SIGRTMIN + 37,
__SIGRTMIN + 38,
__SIGRTMIN + 39,
__SIGRTMIN + 40,
__SIGRTMIN + 41,
__SIGRTMIN + 42,
__SIGRTMIN + 43,
__SIGRTMIN + 44,
__SIGRTMIN + 45,
__SIGRTMIN + 46,
__SIGRTMIN + 47,
__SIGRTMIN + 48,
__SIGRTMIN + 49,
__SIGRTMIN + 50,
__SIGRTMIN + 51,
__SIGRTMIN + 52,
__SIGRTMIN + 53,
__SIGRTMIN + 54,
__SIGRTMIN + 55,
__SIGRTMIN + 56,
__SIGRTMIN + 57,
__SIGRTMIN + 58,
__SIGRTMIN + 59,
__SIGRTMIN + 60,
__SIGRTMIN + 61,
__SIGRTMIN + 62,
__SIGRTMIN + 63,
__SIGRTMIN + 64,
__SIGRTMIN + 65,
__SIGRTMIN + 66,
__SIGRTMIN + 67,
__SIGRTMIN + 68,
__SIGRTMIN + 69,
__SIGRTMIN + 70,
__SIGRTMIN + 71,
__SIGRTMIN + 72,
__SIGRTMIN + 73,
__SIGRTMIN + 74,
__SIGRTMIN + 75,
__SIGRTMIN + 76,
__SIGRTMIN + 77,
__SIGRTMIN + 78,
__SIGRTMIN + 79,
__SIGRTMIN + 80,
__SIGRTMIN + 81,
__SIGRTMIN + 82,
__SIGRTMIN + 83,
__SIGRTMIN + 84,
__SIGRTMIN + 85,
__SIGRTMIN + 86,
__SIGRTMIN + 87,
__SIGRTMIN + 88,
__SIGRTMIN + 89,
__SIGRTMIN + 90,
__SIGRTMIN + 91,
__SIGRTMIN + 92,
__SIGRTMIN + 93,
__SIGRTMIN + 94,
__SIGRTMIN + 95,
-1, /* SIGINFO */
-1, /* UNKNOWN */
-1, /* DEFAULT */
-1,
-1,
-1,
-1,
-1,
-1
#endif
};
int gdb_signal_to_target(int sig)
{
if (sig < ARRAY_SIZE(gdb_signal_table)) {
return gdb_signal_table[sig];
} else {
return -1;
}
}
int gdb_target_signal_to_gdb(int sig)
{
int i;
for (i = 0; i < ARRAY_SIZE(gdb_signal_table); i++) {
if (gdb_signal_table[i] == sig) {
return i;
}
}
return GDB_SIGNAL_UNKNOWN;
}
int gdb_get_cpu_index(CPUState *cpu)
{
TaskState *ts = (TaskState *) cpu->opaque;
return ts ? ts->ts_tid : -1;
}
/*
* User-mode specific command helpers
*/
void gdb_handle_query_offsets(GArray *params, void *user_ctx)
{
TaskState *ts;
ts = gdbserver_state.c_cpu->opaque;
g_string_printf(gdbserver_state.str_buf,
"Text=" TARGET_ABI_FMT_lx
";Data=" TARGET_ABI_FMT_lx
";Bss=" TARGET_ABI_FMT_lx,
ts->info->code_offset,
ts->info->data_offset,
ts->info->data_offset);
gdb_put_strbuf();
}
#if defined(CONFIG_LINUX)
/* Partial user only duplicate of helper in gdbstub.c */
static inline int target_memory_rw_debug(CPUState *cpu, target_ulong addr,
uint8_t *buf, int len, bool is_write)
{
CPUClass *cc;
cc = CPU_GET_CLASS(cpu);
if (cc->memory_rw_debug) {
return cc->memory_rw_debug(cpu, addr, buf, len, is_write);
}
return cpu_memory_rw_debug(cpu, addr, buf, len, is_write);
}
void gdb_handle_query_xfer_auxv(GArray *params, void *user_ctx)
{
TaskState *ts;
unsigned long offset, len, saved_auxv, auxv_len;
if (params->len < 2) {
gdb_put_packet("E22");
return;
}
offset = get_param(params, 0)->val_ul;
len = get_param(params, 1)->val_ul;
ts = gdbserver_state.c_cpu->opaque;
saved_auxv = ts->info->saved_auxv;
auxv_len = ts->info->auxv_len;
if (offset >= auxv_len) {
gdb_put_packet("E00");
return;
}
if (len > (MAX_PACKET_LENGTH - 5) / 2) {
len = (MAX_PACKET_LENGTH - 5) / 2;
}
if (len < auxv_len - offset) {
g_string_assign(gdbserver_state.str_buf, "m");
} else {
g_string_assign(gdbserver_state.str_buf, "l");
len = auxv_len - offset;
}
g_byte_array_set_size(gdbserver_state.mem_buf, len);
if (target_memory_rw_debug(gdbserver_state.g_cpu, saved_auxv + offset,
gdbserver_state.mem_buf->data, len, false)) {
gdb_put_packet("E14");
return;
}
gdb_memtox(gdbserver_state.str_buf,
(const char *)gdbserver_state.mem_buf->data, len);
gdb_put_packet_binary(gdbserver_state.str_buf->str,
gdbserver_state.str_buf->len, true);
}
#endif

View File

@@ -3,423 +3,16 @@
*
* We know for user-mode we are using TCG so we can call stuff directly.
*
* Copyright (c) 2003-2005 Fabrice Bellard
* Copyright (c) 2022 Linaro Ltd
*
* SPDX-License-Identifier: LGPL-2.0+
* SPDX-License-Identifier: GPL-2.0-or-later
*/
#include "qemu/osdep.h"
#include "qemu/cutils.h"
#include "qemu/sockets.h"
#include "exec/hwaddr.h"
#include "exec/tb-flush.h"
#include "exec/gdbstub.h"
#include "gdbstub/syscalls.h"
#include "gdbstub/user.h"
#include "hw/core/cpu.h"
#include "trace.h"
#include "internals.h"
/* User-mode specific state */
typedef struct {
int fd;
char *socket_path;
int running_state;
} GDBUserState;
static GDBUserState gdbserver_user_state;
int gdb_get_char(void)
{
uint8_t ch;
int ret;
for (;;) {
ret = recv(gdbserver_user_state.fd, &ch, 1, 0);
if (ret < 0) {
if (errno == ECONNRESET) {
gdbserver_user_state.fd = -1;
}
if (errno != EINTR) {
return -1;
}
} else if (ret == 0) {
close(gdbserver_user_state.fd);
gdbserver_user_state.fd = -1;
return -1;
} else {
break;
}
}
return ch;
}
bool gdb_got_immediate_ack(void)
{
int i;
i = gdb_get_char();
if (i < 0) {
/* no response, continue anyway */
return true;
}
if (i == '+') {
/* received correctly, continue */
return true;
}
/* anything else, including '-' then try again */
return false;
}
void gdb_put_buffer(const uint8_t *buf, int len)
{
int ret;
while (len > 0) {
ret = send(gdbserver_user_state.fd, buf, len, 0);
if (ret < 0) {
if (errno != EINTR) {
return;
}
} else {
buf += ret;
len -= ret;
}
}
}
/* Tell the remote gdb that the process has exited. */
void gdb_exit(int code)
{
char buf[4];
if (!gdbserver_state.init) {
return;
}
if (gdbserver_user_state.socket_path) {
unlink(gdbserver_user_state.socket_path);
}
if (gdbserver_user_state.fd < 0) {
return;
}
trace_gdbstub_op_exiting((uint8_t)code);
snprintf(buf, sizeof(buf), "W%02x", (uint8_t)code);
gdb_put_packet(buf);
}
int gdb_handlesig(CPUState *cpu, int sig)
{
char buf[256];
int n;
if (!gdbserver_state.init || gdbserver_user_state.fd < 0) {
return sig;
}
/* disable single step if it was enabled */
cpu_single_step(cpu, 0);
tb_flush(cpu);
if (sig != 0) {
gdb_set_stop_cpu(cpu);
g_string_printf(gdbserver_state.str_buf,
"T%02xthread:", gdb_target_signal_to_gdb(sig));
gdb_append_thread_id(cpu, gdbserver_state.str_buf);
g_string_append_c(gdbserver_state.str_buf, ';');
gdb_put_strbuf();
}
/*
* gdb_put_packet() might have detected that the peer terminated the
* connection.
*/
if (gdbserver_user_state.fd < 0) {
return sig;
}
sig = 0;
gdbserver_state.state = RS_IDLE;
gdbserver_user_state.running_state = 0;
while (gdbserver_user_state.running_state == 0) {
n = read(gdbserver_user_state.fd, buf, 256);
if (n > 0) {
int i;
for (i = 0; i < n; i++) {
gdb_read_byte(buf[i]);
}
} else {
/*
* XXX: Connection closed. Should probably wait for another
* connection before continuing.
*/
if (n == 0) {
close(gdbserver_user_state.fd);
}
gdbserver_user_state.fd = -1;
return sig;
}
}
sig = gdbserver_state.signal;
gdbserver_state.signal = 0;
return sig;
}
/* Tell the remote gdb that the process has exited due to SIG. */
void gdb_signalled(CPUArchState *env, int sig)
{
char buf[4];
if (!gdbserver_state.init || gdbserver_user_state.fd < 0) {
return;
}
snprintf(buf, sizeof(buf), "X%02x", gdb_target_signal_to_gdb(sig));
gdb_put_packet(buf);
}
static void gdb_accept_init(int fd)
{
gdb_init_gdbserver_state();
gdb_create_default_process(&gdbserver_state);
gdbserver_state.processes[0].attached = true;
gdbserver_state.c_cpu = gdb_first_attached_cpu();
gdbserver_state.g_cpu = gdbserver_state.c_cpu;
gdbserver_user_state.fd = fd;
gdb_has_xml = false;
}
static bool gdb_accept_socket(int gdb_fd)
{
int fd;
for (;;) {
fd = accept(gdb_fd, NULL, NULL);
if (fd < 0 && errno != EINTR) {
perror("accept socket");
return false;
} else if (fd >= 0) {
qemu_set_cloexec(fd);
break;
}
}
gdb_accept_init(fd);
return true;
}
static int gdbserver_open_socket(const char *path)
{
struct sockaddr_un sockaddr = {};
int fd, ret;
fd = socket(AF_UNIX, SOCK_STREAM, 0);
if (fd < 0) {
perror("create socket");
return -1;
}
sockaddr.sun_family = AF_UNIX;
pstrcpy(sockaddr.sun_path, sizeof(sockaddr.sun_path) - 1, path);
ret = bind(fd, (struct sockaddr *)&sockaddr, sizeof(sockaddr));
if (ret < 0) {
perror("bind socket");
close(fd);
return -1;
}
ret = listen(fd, 1);
if (ret < 0) {
perror("listen socket");
close(fd);
return -1;
}
return fd;
}
static bool gdb_accept_tcp(int gdb_fd)
{
struct sockaddr_in sockaddr = {};
socklen_t len;
int fd;
for (;;) {
len = sizeof(sockaddr);
fd = accept(gdb_fd, (struct sockaddr *)&sockaddr, &len);
if (fd < 0 && errno != EINTR) {
perror("accept");
return false;
} else if (fd >= 0) {
qemu_set_cloexec(fd);
break;
}
}
/* set short latency */
if (socket_set_nodelay(fd)) {
perror("setsockopt");
close(fd);
return false;
}
gdb_accept_init(fd);
return true;
}
static int gdbserver_open_port(int port)
{
struct sockaddr_in sockaddr;
int fd, ret;
fd = socket(PF_INET, SOCK_STREAM, 0);
if (fd < 0) {
perror("socket");
return -1;
}
qemu_set_cloexec(fd);
socket_set_fast_reuse(fd);
sockaddr.sin_family = AF_INET;
sockaddr.sin_port = htons(port);
sockaddr.sin_addr.s_addr = 0;
ret = bind(fd, (struct sockaddr *)&sockaddr, sizeof(sockaddr));
if (ret < 0) {
perror("bind");
close(fd);
return -1;
}
ret = listen(fd, 1);
if (ret < 0) {
perror("listen");
close(fd);
return -1;
}
return fd;
}
int gdbserver_start(const char *port_or_path)
{
int port = g_ascii_strtoull(port_or_path, NULL, 10);
int gdb_fd;
if (port > 0) {
gdb_fd = gdbserver_open_port(port);
} else {
gdb_fd = gdbserver_open_socket(port_or_path);
}
if (gdb_fd < 0) {
return -1;
}
if (port > 0 && gdb_accept_tcp(gdb_fd)) {
return 0;
} else if (gdb_accept_socket(gdb_fd)) {
gdbserver_user_state.socket_path = g_strdup(port_or_path);
return 0;
}
/* gone wrong */
close(gdb_fd);
return -1;
}
/* Disable gdb stub for child processes. */
void gdbserver_fork(CPUState *cpu)
{
if (!gdbserver_state.init || gdbserver_user_state.fd < 0) {
return;
}
close(gdbserver_user_state.fd);
gdbserver_user_state.fd = -1;
cpu_breakpoint_remove_all(cpu, BP_GDB);
/* no cpu_watchpoint_remove_all for user-mode */
}
/*
* Execution state helpers
*/
void gdb_handle_query_attached(GArray *params, void *user_ctx)
{
gdb_put_packet("0");
}
void gdb_continue(void)
{
gdbserver_user_state.running_state = 1;
trace_gdbstub_op_continue();
}
/*
* Resume execution, for user-mode emulation it's equivalent to
* gdb_continue.
*/
int gdb_continue_partial(char *newstates)
{
CPUState *cpu;
int res = 0;
/*
* This is not exactly accurate, but it's an improvement compared to the
* previous situation, where only one CPU would be single-stepped.
*/
CPU_FOREACH(cpu) {
if (newstates[cpu->cpu_index] == 's') {
trace_gdbstub_op_stepping(cpu->cpu_index);
cpu_single_step(cpu, gdbserver_state.sstep_flags);
}
}
gdbserver_user_state.running_state = 1;
return res;
}
/*
* Memory access helpers
*/
int gdb_target_memory_rw_debug(CPUState *cpu, hwaddr addr,
uint8_t *buf, int len, bool is_write)
{
CPUClass *cc;
cc = CPU_GET_CLASS(cpu);
if (cc->memory_rw_debug) {
return cc->memory_rw_debug(cpu, addr, buf, len, is_write);
}
return cpu_memory_rw_debug(cpu, addr, buf, len, is_write);
}
/*
* cpu helpers
*/
unsigned int gdb_get_max_cpus(void)
{
CPUState *cpu;
unsigned int max_cpus = 1;
CPU_FOREACH(cpu) {
max_cpus = max_cpus <= cpu->cpu_index ? cpu->cpu_index + 1 : max_cpus;
}
return max_cpus;
}
/* replay not supported for user-mode */
bool gdb_can_reverse(void)
{
return false;
}
/*
* Break/Watch point helpers
*/
bool gdb_supports_guest_debug(void)
{
/* user-mode == TCG == supported */
@@ -472,17 +65,3 @@ void gdb_breakpoint_remove_all(CPUState *cs)
{
cpu_breakpoint_remove_all(cs, BP_GDB);
}
/*
* For user-mode syscall support we send the system call immediately
* and then return control to gdb for it to process the syscall request.
* Since the protocol requires that gdb hands control back to us
* using a "here are the results" F packet, we don't need to check
* gdb_handlesig's return value (which is the signal to deliver if
* execution was resumed via a continue packet).
*/
void gdb_syscall_handling(const char *syscall_packet)
{
gdb_put_packet(syscall_packet);
gdb_handlesig(gdbserver_state.c_cpu, 0);
}

View File

@@ -31,11 +31,8 @@ EmailMap contrib/gitdm/domain-map
# identifiable corporate emails. Please keep this list sorted.
#
GroupMap contrib/gitdm/group-map-alibaba Alibaba
GroupMap contrib/gitdm/group-map-amd AMD
GroupMap contrib/gitdm/group-map-cadence Cadence Design Systems
GroupMap contrib/gitdm/group-map-codeweavers CodeWeavers
GroupMap contrib/gitdm/group-map-facebook Facebook
GroupMap contrib/gitdm/group-map-ibm IBM
GroupMap contrib/gitdm/group-map-janustech Janus Technologies
GroupMap contrib/gitdm/group-map-netflix Netflix

View File

@@ -993,17 +993,3 @@ SRST
``info virtio-queue-element`` *path* *queue* [*index*]
Display element of a given virtio queue
ERST
{
.name = "cryptodev",
.args_type = "",
.params = "",
.help = "show the crypto devices",
.cmd = hmp_info_cryptodev,
.flags = "p",
},
SRST
``info cryptodev``
Show the crypto devices.
ERST

View File

@@ -1486,7 +1486,6 @@ SRST
Inject an MCE on the given CPU (x86 only).
ERST
#ifdef CONFIG_POSIX
{
.name = "getfd",
.args_type = "fdname:s",
@@ -1502,7 +1501,6 @@ SRST
mechanism on unix sockets, it is stored using the name *fdname* for
later use by other monitor commands.
ERST
#endif
{
.name = "closefd",

View File

@@ -15,7 +15,7 @@ fs_ss.add(files(
))
fs_ss.add(when: 'CONFIG_LINUX', if_true: files('9p-util-linux.c'))
fs_ss.add(when: 'CONFIG_DARWIN', if_true: files('9p-util-darwin.c'))
fs_ss.add(when: 'CONFIG_XEN_BUS', if_true: files('xen-9p-backend.c'))
fs_ss.add(when: 'CONFIG_XEN', if_true: files('xen-9p-backend.c'))
softmmu_ss.add_all(when: 'CONFIG_FSDEV_9P', if_true: fs_ss)
specific_ss.add(when: 'CONFIG_VIRTIO_9P', if_true: files('virtio-9p-device.c'))

View File

@@ -22,7 +22,6 @@
#include "qemu/config-file.h"
#include "qemu/main-loop.h"
#include "qemu/option.h"
#include "qemu/iov.h"
#include "fsdev/qemu-fsdev.h"
#define VERSIONS "1"
@@ -242,7 +241,7 @@ static void xen_9pfs_push_and_notify(V9fsPDU *pdu)
xen_wmb();
ring->inprogress = false;
qemu_xen_evtchn_notify(ring->evtchndev, ring->local_port);
xenevtchn_notify(ring->evtchndev, ring->local_port);
qemu_bh_schedule(ring->bh);
}
@@ -325,8 +324,8 @@ static void xen_9pfs_evtchn_event(void *opaque)
Xen9pfsRing *ring = opaque;
evtchn_port_t port;
port = qemu_xen_evtchn_pending(ring->evtchndev);
qemu_xen_evtchn_unmask(ring->evtchndev, port);
port = xenevtchn_pending(ring->evtchndev);
xenevtchn_unmask(ring->evtchndev, port);
qemu_bh_schedule(ring->bh);
}
@@ -338,10 +337,10 @@ static void xen_9pfs_disconnect(struct XenLegacyDevice *xendev)
for (i = 0; i < xen_9pdev->num_rings; i++) {
if (xen_9pdev->rings[i].evtchndev != NULL) {
qemu_set_fd_handler(qemu_xen_evtchn_fd(xen_9pdev->rings[i].evtchndev),
NULL, NULL, NULL);
qemu_xen_evtchn_unbind(xen_9pdev->rings[i].evtchndev,
xen_9pdev->rings[i].local_port);
qemu_set_fd_handler(xenevtchn_fd(xen_9pdev->rings[i].evtchndev),
NULL, NULL, NULL);
xenevtchn_unbind(xen_9pdev->rings[i].evtchndev,
xen_9pdev->rings[i].local_port);
xen_9pdev->rings[i].evtchndev = NULL;
}
}
@@ -360,13 +359,12 @@ static int xen_9pfs_free(struct XenLegacyDevice *xendev)
if (xen_9pdev->rings[i].data != NULL) {
xen_be_unmap_grant_refs(&xen_9pdev->xendev,
xen_9pdev->rings[i].data,
xen_9pdev->rings[i].intf->ref,
(1 << xen_9pdev->rings[i].ring_order));
}
if (xen_9pdev->rings[i].intf != NULL) {
xen_be_unmap_grant_ref(&xen_9pdev->xendev,
xen_9pdev->rings[i].intf,
xen_9pdev->rings[i].ref);
xen_be_unmap_grant_refs(&xen_9pdev->xendev,
xen_9pdev->rings[i].intf,
1);
}
if (xen_9pdev->rings[i].bh != NULL) {
qemu_bh_delete(xen_9pdev->rings[i].bh);
@@ -449,12 +447,12 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
xen_9pdev->rings[i].inprogress = false;
xen_9pdev->rings[i].evtchndev = qemu_xen_evtchn_open();
xen_9pdev->rings[i].evtchndev = xenevtchn_open(NULL, 0);
if (xen_9pdev->rings[i].evtchndev == NULL) {
goto out;
}
qemu_set_cloexec(qemu_xen_evtchn_fd(xen_9pdev->rings[i].evtchndev));
xen_9pdev->rings[i].local_port = qemu_xen_evtchn_bind_interdomain
qemu_set_cloexec(xenevtchn_fd(xen_9pdev->rings[i].evtchndev));
xen_9pdev->rings[i].local_port = xenevtchn_bind_interdomain
(xen_9pdev->rings[i].evtchndev,
xendev->dom,
xen_9pdev->rings[i].evtchn);
@@ -465,8 +463,8 @@ static int xen_9pfs_connect(struct XenLegacyDevice *xendev)
goto out;
}
xen_pv_printf(xendev, 2, "bind evtchn port %d\n", xendev->local_port);
qemu_set_fd_handler(qemu_xen_evtchn_fd(xen_9pdev->rings[i].evtchndev),
xen_9pfs_evtchn_event, NULL, &xen_9pdev->rings[i]);
qemu_set_fd_handler(xenevtchn_fd(xen_9pdev->rings[i].evtchndev),
xen_9pfs_evtchn_event, NULL, &xen_9pdev->rings[i]);
}
xen_9pdev->security_model = xenstore_read_be_str(xendev, "security_model");

@@ -5,7 +5,8 @@
const VMStateDescription vmstate_acpi_pcihp_pci_status;
void acpi_pcihp_init(Object *owner, AcpiPciHpState *s, PCIBus *root_bus,
MemoryRegion *address_space_io, uint16_t io_base)
MemoryRegion *address_space_io, bool bridges_enabled,
uint16_t io_base)
{
return;
}
@@ -35,12 +36,8 @@ void acpi_pcihp_device_unplug_request_cb(HotplugHandler *hotplug_dev,
return;
}
void acpi_pcihp_reset(AcpiPciHpState *s)
void acpi_pcihp_reset(AcpiPciHpState *s, bool acpihp_root_off)
{
return;
}
bool acpi_pcihp_is_hotpluggbale_bus(AcpiPciHpState *s, BusState *bus)
{
return true;
}

@@ -218,7 +218,7 @@ static bool vmstate_test_use_pcihp(void *opaque)
{
ICH9LPCPMRegs *s = opaque;
return s->acpi_pci_hotplug.use_acpi_hotplug_bridge;
return s->use_acpi_hotplug_bridge;
}
static const VMStateDescription vmstate_pcihp_state = {
@@ -277,8 +277,8 @@ static void pm_reset(void *opaque)
}
pm->smi_en_wmask = ~0;
if (pm->acpi_pci_hotplug.use_acpi_hotplug_bridge) {
acpi_pcihp_reset(&pm->acpi_pci_hotplug);
if (pm->use_acpi_hotplug_bridge) {
acpi_pcihp_reset(&pm->acpi_pci_hotplug, true);
}
acpi_update_sci(&pm->acpi_regs, pm->irq);
@@ -316,11 +316,12 @@ void ich9_pm_init(PCIDevice *lpc_pci, ICH9LPCPMRegs *pm, qemu_irq sci_irq)
acpi_pm_tco_init(&pm->tco_regs, &pm->io);
}
if (pm->acpi_pci_hotplug.use_acpi_hotplug_bridge) {
if (pm->use_acpi_hotplug_bridge) {
acpi_pcihp_init(OBJECT(lpc_pci),
&pm->acpi_pci_hotplug,
pci_get_bus(lpc_pci),
pci_address_space_io(lpc_pci),
true,
ACPI_PCIHP_ADDR_ICH9);
qbus_set_hotplug_handler(BUS(pci_get_bus(lpc_pci)),
@@ -402,14 +403,14 @@ static bool ich9_pm_get_acpi_pci_hotplug(Object *obj, Error **errp)
{
ICH9LPCState *s = ICH9_LPC_DEVICE(obj);
return s->pm.acpi_pci_hotplug.use_acpi_hotplug_bridge;
return s->pm.use_acpi_hotplug_bridge;
}
static void ich9_pm_set_acpi_pci_hotplug(Object *obj, bool value, Error **errp)
{
ICH9LPCState *s = ICH9_LPC_DEVICE(obj);
s->pm.acpi_pci_hotplug.use_acpi_hotplug_bridge = value;
s->pm.use_acpi_hotplug_bridge = value;
}
static bool ich9_pm_get_keep_pci_slot_hpc(Object *obj, Error **errp)
@@ -434,7 +435,7 @@ void ich9_pm_add_properties(Object *obj, ICH9LPCPMRegs *pm)
pm->disable_s3 = 0;
pm->disable_s4 = 0;
pm->s4_val = 2;
pm->acpi_pci_hotplug.use_acpi_hotplug_bridge = true;
pm->use_acpi_hotplug_bridge = true;
pm->keep_pci_slot_hpc = true;
pm->enable_tco = true;
@@ -578,12 +579,6 @@ void ich9_pm_device_unplug_cb(HotplugHandler *hotplug_dev, DeviceState *dev,
}
}
bool ich9_pm_is_hotpluggable_bus(HotplugHandler *hotplug_dev, BusState *bus)
{
ICH9LPCState *lpc = ICH9_LPC_DEVICE(hotplug_dev);
return acpi_pcihp_is_hotpluggbale_bus(&lpc->pm.acpi_pci_hotplug, bus);
}
void ich9_pm_ospm_status(AcpiDeviceIf *adev, ACPIOSTInfoList ***list)
{
ICH9LPCState *s = ICH9_LPC_DEVICE(adev);

Some files were not shown because too many files have changed in this diff.