Cleber and I are volunteering to review and queue patches for the
Python scripts and modules in scripts/.
I'm setting "M: Odd fixes" because not all scripts are actively
maintained.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170915230744.22942-1-ehabkost@redhat.com>
[ehabkost: add tests/*.py too]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Not all scripts using qemu.py configure the Python logging
module, and end up generating a "No handlers could be found for
logger" message instead of actual log messages.
To avoid requiring every script using qemu.py to configure
logging manually, call basicConfig() when creating a QEMUMachine
object. This won't affect scripts that already set up logging,
but will ensure that scripts that don't configure logging keep
working.
Reported-by: Kevin Wolf <kwolf@redhat.com>
Fixes: 4738b0a85a
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170921162234.847-1-ehabkost@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Acked-by: Lukáš Doktor <ldoktor@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
MIPS patches 2017-09-21
Changes:
QOMify MIPS cpu
Improve macro parenthesization
# gpg: Signature made Thu 21 Sep 2017 13:50:37 BST
# gpg: using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA 2B5C 2238 EB86 D5F7 97C2
* remotes/yongbok/tags/mips-20170921:
mips: Improve macro parenthesization
mips: replace cpu_mips_init() with cpu_generic_init()
mips: MIPSCPU model subclasses
mips: call cpu_mips_realize_env() from mips_cpu_realizefn()
mips: split cpu_mips_realize_env() out of cpu_mips_init()
mips: introduce internal.h and cleanup cpu.h
mips: move hw/mips/cputimer.c to target/mips/
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Although none of the existing macro call-sites were broken,
it's always better to write macros that properly parenthesize
arguments that can be complex expressions, so that the intended
order of operations is not broken.
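As a hedged illustration of the point (these are not the actual MIPS macros touched by this patch), compare an unparenthesized and a parenthesized function-like macro:

    /* Illustration only -- not the macros from target/mips. */
    #define BAD_DOUBLE(x)   x * 2          /* argument unprotected */
    #define GOOD_DOUBLE(x)  ((x) * 2)      /* argument and result parenthesized */

    /*
     * BAD_DOUBLE(1 + 2)  expands to  1 + 2 * 2    == 5  (precedence surprise)
     * GOOD_DOUBLE(1 + 2) expands to  ((1 + 2) * 2) == 6  (intended result)
     */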
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
cpu_mips_init() now reimplements a subset of the cpu_generic_init()
tasks, so just drop it and use cpu_generic_init() directly.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: use internal.h instead of cpu.h]
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Register separate QOM types for each mips cpu model,
so it would be possible to reuse generic CPU creation
routines.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[PMD: use internal.h, use void* to hold cpu_def in MIPSCPUClass,
mark MIPSCPU abstract, address Eduardo Habkost review]
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
This changes the order between cpu_mips_realize_env() and
cpu_exec_initfn(), but cpu_exec_initfn() doesn't have anything that
depends on cpu_mips_realize_env() being called first.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
This timer is a required part of the MIPS32/MIPS64 System Control coprocessor
(CP0). Moving it with the other architecture-related files will allow an opaque
use of CPUMIPSState* in the next commit (introduce "internal.h").
Also remove it from 'user' targets and drop an unnecessary include.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
When an MSI interrupt is bound to a guest using
xc_domain_update_msi_irq (XEN_DOMCTL_bind_pt_irq), the interrupt is
left masked by default.
This causes problems with guests that first configure interrupts, then
clear the per-entry MSIX table mask bit, and only afterwards enable MSIX
globally. In such a scenario the Xen internal msixtbl handlers would not
detect the unmasking of MSIX entries, because the vectors are not yet
registered since MSIX is not enabled, and the vectors would be left
masked.
Introduce a new flag in the gflags field to signal Xen whether an MSI
interrupt should be unmasked after being bound.
This also requires tracking the mask register for MSI interrupts, so
QEMU can notify Xen whether the MSI interrupt should be bound masked
or unmasked.
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reported-by: Andreas Kinzler <hfp@posteo.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
g_malloc0_n is only available since glib-2.24. To allow building with older
glib versions, use the generic g_new0, which is already used in many other
places in the code.
Fixes commit 3284fad728 ("xen-disk: add support for multi-page shared rings")
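A minimal sketch of the substitution, using a hypothetical allocation helper rather than the real xen-disk code:

    #include <stdint.h>
    #include <glib.h>

    /* Hypothetical helper illustrating the swap made by this patch;
     * the caller frees the array with g_free(). */
    static uint32_t *alloc_ring_refs(size_t nr_refs)
    {
        /* Needs glib >= 2.24:
         *     return g_malloc0_n(nr_refs, sizeof(uint32_t));
         * Works with older glib and is type-checked:
         */
        return g_new0(uint32_t, nr_refs);
    }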
Signed-off-by: Olaf Hering <olaf@aepfle.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
These patches fix regressions in 2.10
# gpg: Signature made Wed 20 Sep 2017 07:51:07 BST
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
9pfs: check the size of transport buffer before marshaling
9pfs: fix name_to_path assertion in v9fs_complete_rename()
9pfs: fix readdir() for 9p2000.u
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Machine/CPU/NUMA queue, 2017-09-19
# gpg: Signature made Tue 19 Sep 2017 21:17:01 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/machine-next-pull-request:
MAINTAINERS: Update git URLs for my trees
hw/acpi-build: Fix SRAT memory building in case of node 0 without RAM
NUMA: Replace MAX_NODES with nb_numa_nodes in for loop
numa: cpu: calculate/set default node-ids after all -numa CLI options are parsed
arm: drop intermediate cpu_model -> cpu type parsing and use cpu type directly
pc: use generic cpu_model parsing
vl.c: convert cpu_model to cpu type and set of global properties before machine_init()
cpu: make cpu_generic_init() abort QEMU on error
qom: cpus: split cpu_generic_init() on feature parsing and cpu creation parts
hostmem-file: Add "discard-data" option
osdep: Define QEMU_MADV_REMOVE
vl: Clean up user-creatable objects when exiting
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
v9fs_do_readdir_with_stat() should check against the maximum buffer size
before attempting to marshal the gathered data. Otherwise the buffer is
assumed to be misconfigured and the transport is broken.
The patch brings v9fs_do_readdir_with_stat() into conformity with the
v9fs_do_readdir() behavior.
Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
[groug, regression caused by commit 8d37de41ca # 2.10]
Signed-off-by: Greg Kurz <groug@kaod.org>
The third parameter of v9fs_co_name_to_path() must not contain a '/'
character.
The issue is most likely related to the 9p2000.u protocol only.
Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
[groug, regression caused by commit f57f587857 # 2.10]
Signed-off-by: Greg Kurz <groug@kaod.org>
If the client is using 9p2000.u, the following occurs:
$ cd ${virtfs_shared_dir}
$ mkdir -p a/b/c
$ ls a/b
ls: cannot access 'a/b/a': No such file or directory
ls: cannot access 'a/b/b': No such file or directory
a b c
instead of the expected:
$ ls a/b
c
This is a regression introduced by commit f57f5878578a;
local_name_to_path() now resolves ".." and "." in paths,
and v9fs_do_readdir_with_stat()->stat_to_v9stat() then
copies the basename of the resulting path to the response.
With the example above, this means that "." and ".." are
turned into "b" and "a" respectively...
stat_to_v9stat() currently assumes it is passed a full
canonicalized path and uses it to do two different things:
1) to pass it to v9fs_co_readlink() in case the file is a symbolic
link
2) to set the name field of the V9fsStat structure to the basename
part of the given path
It only has two users: v9fs_stat() and v9fs_do_readdir_with_stat().
v9fs_stat() really needs 1) and 2) to be performed since it starts
with the full canonicalized path stored in the fid. It is different
for v9fs_do_readdir_with_stat() though because the name we want to
put into the V9fsStat structure is the d_name field of the dirent
actually (ie, we want to keep the "." and ".." special names). So,
we only need 1) in this case.
This patch hence adds a basename argument to stat_to_v9stat(), to
be used to set the name field of the V9fsStat structure, and moves
the basename logic to v9fs_stat().
Signed-off-by: Jan Dakinevich <jan.dakinevich@gmail.com>
(groug, renamed old name argument to path and updated changelog)
Signed-off-by: Greg Kurz <groug@kaod.org>
List the branches where I queue patches for Machine Core, NUMA,
Memory Backends, and X86. Update the NUMA section to list the
"machine-next" branch instead of "numa".
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170901153928.17058-1-ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Currently, using the first node without memory on the machine makes
QEMU unhappy. With this example command line:
... \
-m 1024M,slots=4,maxmem=32G \
-numa node,nodeid=0 \
-numa node,mem=1024M,nodeid=1 \
-numa node,nodeid=2 \
-numa node,nodeid=3 \
Guest reports "No NUMA configuration found" and the NUMA topology is
wrong.
This is because, when QEMU builds the ACPI SRAT, it regards node 0 as the
default node to deal with the memory hole (640K-1M). This means node 0
must have some memory (>1M), but actually it can have none.
Fix this problem by cutting out the 640K hole in the same way the PCI
4G hole is handled.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Message-Id: <1504231805-30957-2-git-send-email-douly.fnst@cn.fujitsu.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
In QEMU, the number of the NUMA nodes is determined by parse_numa_opts().
Then, QEMU uses it for iteration, for example:
for (i = 0; i < nb_numa_nodes; i++)
However, memory_region_allocate_system_memory() uses MAX_NODES,
not nb_numa_nodes.
So, replace MAX_NODES with nb_numa_nodes to keep the code consistent and
reduce the number of loop iterations.
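A minimal sketch of the loop-bound change; numa_info[] and nb_numa_nodes stand in for the QEMU globals of the same name and the allocation body is elided:

    /* Sketch only -- simplified from memory_region_allocate_system_memory(). */
    static void allocate_per_node_blocks(void)
    {
        int i;

        for (i = 0; i < nb_numa_nodes; i++) {   /* previously: i < MAX_NODES */
            if (!numa_info[i].node_mem) {
                continue;
            }
            /* ... allocate and map the memory block for node i ... */
        }
    }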
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Message-Id: <1503387936-3483-1-git-send-email-douly.fnst@cn.fujitsu.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Calculating default node-ids for CPUs in possible_cpu_arch_ids()
is rather fragile, since the defaults calculation uses nb_numa_nodes but
the callback might be called early, before all -numa CLI options are
parsed, which would lead to CPUs being assigned only up to the
nb_numa_nodes known at the time possible_cpu_arch_ids() is called.
The issue was introduced by
(7c88e65 numa: mirror cpu to node mapping in MachineState::possible_cpus);
for example, the CLI:
-smp 4 -numa node,cpus=0 -numa node
would set props.node-id in the possible_cpus array for every not
explicitly mapped CPU to the first node.
The issue is not visible to the guest nor to the management interface,
because:
1) implicitly mapped CPUs are forced to the first node in
case of partial mapping;
2) in case of default mapping, possible_cpu_arch_ids() is
called after all -numa options are parsed (resulting
in a correct mapping).
However, it's fragile to rely on the late execution of
possible_cpu_arch_ids(); therefore, add a machine-specific
callback that returns the node-id for a CPU and use it to calculate
and set the defaults at machine_numa_finish_init() time, when all -numa
options have been parsed.
Reported-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1496314408-163972-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Assorted s390x patches:
- introduce virtio-gpu-ccw, with virtio-gpu endian fixes
- lots of cleanup in the s390x code
- make device_add work for s390x cpus
- enable seccomp on s390x
- an ivshmem endian fix
- set the reserved DHCP client architecture id for netboot
- fixes in the css and pci support
# gpg: Signature made Tue 19 Sep 2017 17:39:45 BST
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170919-v2: (38 commits)
MAINTAINERS/s390x: add terminal3270.c
virtio-ccw: Create a virtio gpu device for the ccw bus
virtio-gpu: Handle endian conversion
s390x/ccw: create s390 phb for compat reasons as well
configure: Allow --enable-seccomp on s390x, too
virtio-ccw: remove stale comments on endianness
s390x: allow CPU hotplug in random core-id order
s390x: generate sclp cpu information from possible_cpus
s390x: get rid of cpu_s390x_create()
s390x: get rid of cpu_states and use possible_cpus instead
s390x: implement query-hotpluggable-cpus
s390x: CPU hot unplug via device_del cannot work for now
s390x: allow cpu hotplug via device_add
s390x: print CPU definitions in sorted order
target/s390x: rename next_cpu_id to next_core_id
target/s390x: use "core-id" for cpu number/address/id handling
target/s390x: set cpu->id for linux user when realizing
s390x: allow only 1 CPU with TCG
target/s390x: use program_interrupt() in per_check_exception()
target/s390x: use trigger_pgm_exception() in s390_cpu_handle_mmu_fault()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
d32bd032d8 ("s390x/ccw: create s390 phb conditionally") made
registering the s390 pci host bridge conditional on presense
of the zpci facility bit. Sadly, that breaks migration from
machines that did not use the cpu model (2.7 and previous).
Create the s390 phb for pre-cpu model machines as well: We can
tweak s390_has_feat() to always indicate the zpci facility bit
when no cpu model is available (on 2.7 and previous compat machines).
Fixes: d32bd032d8 ("s390x/ccw: create s390 phb conditionally")
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We have two stale comments suggesting one should think about virtio
config space endianness a bit longer. We have just done that, and came to
the conclusion we are fine as is: it's the responsibility of the virtio
device and not of the transport (and that is how it works now). Putting
the responsibility into the transport isn't even possible, because the
transport would have to know about the config space layout of each
device.
Let us remove the stale comments.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Suggested-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20170914105535.47941-1-pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
SCLP correctly indicates the core-id, a.k.a. the CPU address, for each
available CPU.
As the core-id corresponds to cpu_index, a newly created kvm vcpu also
gets assigned this core-id as its vcpu id. So SIGP in the kernel works
correctly (it uses the vcpu id to look up the correct CPU).
So there should be nothing hindering us from hotplugging CPUs in random
core-id order.
This now makes sure that the output from "query-hotpluggable-cpus"
is completely accurate. Until now, a specific order was implicit.
Performance-wise, hotplugging CPUs in non-sequential order might not be
the best thing to do, as VCPU lookup inside KVM might be a little slower.
But that doesn't hinder us from supporting it.
next_core_id is now used by linux user only.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-23-david@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
This is the first step towards allowing hot plug of CPUs in non-sequential
order. Whether a cpu is available ("plugged") can be decided directly by
looking at the cpu state pointer.
This makes sure that really only cpus attached to the machine are
reported.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-22-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Now that there is only one user of cpu_s390x_create() left, make cpu
creation look like on x86.
- Perform the model/properties split and checks in s390_init_cpus()
- Parse features only once without having to remember if already parsed
- Pass only the typename to s390x_new_cpu()
- Use the typename of an existing CPU for hotplug via cpu-add
Acked-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-21-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
device_del on a CPU will currently do nothing. Let's emit an error
telling the user that this currently does not work (there is no
architecture support on s390x). The error message is copied from ppc.
(qemu) device_del cpu1
device_del cpu1
CPU hot unplug not supported on this machine
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-18-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
E.g. the following now works:
device_add host-s390-cpu,id=cpu1,core-id=1
The system will perform the same checks as when using cpu_add:
- If the core_id is already in use
- If the next sequential core_id isn't used
- If core-id >= max_cpu is specified
In addition, mixed CPU models are checked. E.g. if starting with
-cpu host and trying to hotplug "qemu-s390-cpu":
"Mixed CPU models are not supported on s390x."
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-17-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Some time ago we discussed that using "id" as property name is not the
right thing to do, as it is a reserved property for other devices and
will not work with device_add.
Switch to the term "core-id" instead, and use it as an equivalent to
"CPU address" mentioned in the PoP. There is no such thing as cpu number,
so rename env.cpu_num to env.core_id. We use "core-id" as this is the
common term to use for device_add later on (x86 and ppc).
We can get rid of cpu->id now. Keep cpu_index and env->core_id in sync.
cpu_index was already implicitly used by e.g. cpu_exists(), so keeping
both in sync seems to be the right thing to do.
cpu_index will now no longer automatically get set via
cpu_exec_realizefn(). For now, we were lucky that both implicitly stayed
in sync.
Our new cpu property "core-id" can be a static property. Range checks can
be avoided by using the correct type and the "setting after realized"
check is done implicitly.
device_add will later need the reserved "id" property. Hotplugging a CPU
on s390x will then be: "device_add host-s390-cpu,id=cpu2,core-id=2".
Reviewed-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-14-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Specifying more than 1 CPU (e.g. -smp 5) leads to SIGP errors (the
guest tries to bring these CPUs up but fails), because we don't support
multiple CPUs on s390x under TCG.
Let's bail out if more than 1 is specified, so we don't raise people's
hopes.
Tested-by: Matthew Rosato <mjrosato@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170913132417.24384-12-david@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Back then in the time of df1fe5bb49 ("s390: Virtual channel subsystem
support.", 2013-01-24) -EIO used to map to a channel-program check (via
the default label of the switch statement). Then 2dc95b4cac
("s390x/3270: 3270 data stream handling", 2016-04-01) came along
and that changed dramatically.
Let us roll back this undesired side effect, and go back to
channel-program check.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Fixes: 2dc95b4cac "s390x/3270: 3270 data stream handling"
Message-Id: <20170908152446.14606-3-pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The architecture says that a channel-data check indicates that
an uncorrected storage (memory) error has been detected in regard
to the data residing in main storage (memory) that is currently
used for an I/O operation. The described detection is done using
the CBC technology.
The ccw interpretation code, however, effectively generates a
channel-data check when the (device-specific) ccw_cb returns -EFAULT.
In case of virtio-ccw devices this happens when mapping memory fails,
or when a NULL pointer is encountered. So this behavior does not conform
to the architecture.
Furthermore, from an architectural perspective, the best fit for these
situations (null pointer, mapping a piece of guest memory fails) is the
condition described as 'the channel subsystem refers to a location that
is not available', which, when encountered, shall result in a
channel-program check.
To fix this, all we have to do is get rid of the switch case matching
-EFAULT: the default generates a channel-program check.
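A hedged sketch of the shape of the change; the real error handling in the css/ccw code is more involved and uses different names:

    #include <errno.h>

    /* Sketch only: map a (device-specific) ccw callback return value to a
     * channel status. */
    static void report_ccw_result(int ret)
    {
        switch (ret) {
        case -ENODEV:
            /* ... device not operational ... */
            break;
        /* The "case -EFAULT: raise a channel-data check" arm is removed... */
        default:
            /* ...so -EFAULT now lands here: raise a channel-program check. */
            break;
        }
    }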
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170908152446.14606-2-pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The "slow" ivshmem-tests currently fail when they are running on a
big endian host:
$ uname -m
ppc64
$ V=1 QTEST_QEMU_BINARY=x86_64-softmmu/qemu-system-x86_64 tests/ivshmem-test -m slow
/x86_64/ivshmem/single: OK
/x86_64/ivshmem/hotplug: OK
/x86_64/ivshmem/memdev: OK
/x86_64/ivshmem/pair: OK
/x86_64/ivshmem/server-msi: qemu-system-x86_64:
-device ivshmem-doorbell,chardev=chr0,vectors=2: server sent invalid ID message
Broken pipe
The problem is that the server-side code in ivshmem_server_send_one_msg()
correctly translates all message IDs into little-endian 64-bit values,
but the client-side code in the ivshmem_recv_msg() function does not swap
the byte order back. Fix it by passing the value through le64_to_cpu().
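A minimal sketch of the client-side fix, assuming a hypothetical read helper (the real code lives in tests/ivshmem-test.c and uses the qtest glib helpers):

    #include <stdint.h>
    #include <unistd.h>
    #include <glib.h>
    #include "qemu/bswap.h"   /* le64_to_cpu() */

    /* Hypothetical helper: read one server message, return it in host order. */
    static int64_t ivshmem_recv_id(int fd)
    {
        int64_t msg;
        ssize_t ret = read(fd, &msg, sizeof(msg));

        g_assert_cmpint(ret, ==, sizeof(msg));
        return le64_to_cpu(msg);   /* the conversion missing before this patch */
    }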
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1504100343-26607-1-git-send-email-thuth@redhat.com>
Tested-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The guest uses the mpcifc instruction to register the aibvo of a zpci
device, which is the starting offset of indicators in the indicator
area and thus remains constant. Each msix vector is an offset from the
aibvo. When we map a msix route to an adapter route, we should not
modify the starting offset, but instead add the vector to the starting
offset to get the absolute offset in the specific route.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Message-Id: <1504606380-49341-3-git-send-email-zyimin@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
A PCIDevice pointer is now a parameter of kvm_arch_fixup_msi_route(),
so we don't need to store the zpci idx in the msix message data to find
the specific zpci device. Instead, we can use the pci device id to find
its corresponding zpci device.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Message-Id: <1504606380-49341-2-git-send-email-zyimin@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
We can use the drive_del test on s390x, too, to check that adding and
deleting also works fine with the virtio-ccw bus. But we have to make
sure that we use the devices with the "-ccw" suffix instead of the
"-pci" suffix for the virtio-ccw transport on s390x. Introduce a helper
function called qvirtio_get_dev_type() that returns the correct string
for the current architecture.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1504190408-11143-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The function ioinst_handle_xsch presents cc 2 when it's supposed to
present cc 1, and the other way around, because css_do_xsch has the error
codes mixed up. Because cc 1 has precedence over cc 2, we also have to
swap the two checks.
Let us fix this.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reported-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Message-Id: <20170831121828.85885-1-pasic@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The pixman submodule does not exist anymore, and its removal broke
docker-based tests. Fix it.
Cc: Fam Zheng <famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using $(and ...) is dangerous here: it only works as long as the first
argument is set to 'y' or completely unset. It does not work if the
first argument is set to 'n', for example. Let's use the "land" make
function instead, which has been written explicitly for this purpose.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1505759538-15365-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The script doesn't know about all possible types and learns them as
it parses the code. If it reaches a line with a type cast but the
type isn't known yet, the cast is misinterpreted as an identifier.
For example the following line:
foo = (hwaddr) -1;
results in the following false-positive to be reported:
ERROR: spaces required around that '-' (ctx:VxV)
Let's add this standard QEMU type to the list of pre-known types.
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <150538015789.8149.10902725348939486674.stgit@bahia.lan>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, before submitting a series, developers should run checkpatch.pl
across each patch to be submitted. This can be automated using a
command such as:
git rebase -i master -x 'git show | ./scripts/checkpatch.pl -'
This is rather long-winded to type, so this patch introduces a way
to tell checkpatch.pl to validate a series of GIT revisions.
There are now three modes it can operate in: 1) check a patch, 2) check a
source file, or 3) check a git branch.
If no flags are given, the mode is determined by checking the args passed to
the command. If the args contain a literal ".." it is treated as a GIT revision
list. If the args end in ".patch" or equal "-" it is treated as a patch file.
Otherwise it is treated as a source file.
This automatic guessing can be overridden using --[no-]patch, --[no-]file or
--[no-]branch.
For example to check a GIT revision list:
$ ./scripts/checkpatch.pl master..
total: 0 errors, 0 warnings, 297 lines checked
b886d352a2bf58f0996471fb3991a138373a2957 has no obvious style problems and is ready for submission.
total: 0 errors, 0 warnings, 182 lines checked
2a731f9a9ce145e0e0df6d42dd2a3ce4dfc543fa has no obvious style problems and is ready for submission.
total: 0 errors, 0 warnings, 102 lines checked
11844169bcc0c8ed4449eb3744a69877ed329dd7 has no obvious style problems and is ready for submission.
If a genuine patch filename contains the characters '..' it is
possible to force interpretation of the arg as a patch
$ ./scripts/checkpatch.pl --patch master..
will force it to load a patch file called "master..", or equivalently
$ ./scripts/checkpatch.pl --no-branch master..
will simply turn off guessing of GIT revision lists.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20170913091000.9005-1-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
All definitions related to Hyper-V emulation are now taken from QEMU's
own header, so the one imported from the kernel is no longer needed.
Unfortunately it's included by kvm_para.h.
So, until this is fixed in the kernel, teach the header-harvesting
script to substitute the kernel's hyperv.h with a dummy.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20170713201522.13765-3-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The definitions for Hyper-V emulation are currently taken from a header
imported from the Linux kernel.
However, as these describe a third-party protocol rather than a kernel
API, it probably wasn't a good idea to publish it in the kernel uapi.
This patch introduces a header that provides all the necessary
definitions, superseding the one coming from the kernel.
The new header supports (temporary) coexistence with the kernel one.
The constants explicitly named in the Hyper-V specification (e.g. msr
numbers) are defined in a non-conflicting way. Other constants and
types have got new names.
While at this, the protocol data structures are defined in a more
conventional way, without bitfields, enums, and excessive unions.
The code using this stuff is adjusted, too; it can now be built both
with and without the kernel header in the tree.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Message-Id: <20170713201522.13765-2-rkagan@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Starting with Windows Server 2012 and Windows 8, if
CPUID.40000005.EAX contains a value of -1, Windows assumes specific
limit to the number of VPs. In this case, Windows Server 2012
guest VMs may use more than 64 VPs, up to the maximum supported
number of processors applicable to the specific Windows
version being used.
https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/reference/tlfs
For compatibility, let's introduce a new property for X86CPU,
named "x-hv-max-vps" per Eduardo's suggestion, and set it
to 0x40 for machine types before 2.10.
(The "x-" prefix indicates that the property is not supposed to
be a stable user interface.)
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Message-Id: <1505143227-14324-1-git-send-email-arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Convert any remaining uses of fprintf(stderr, "warning:"...
to use warn_report() instead. This helps standardise on a single
method of printing warnings to the user.
All of the warnings were changed using this command:
find ./* -type f -exec sed -i 's|fprintf(.*".*warning[,:] |warn_report("|Ig' {} +
The #include lines and changes to the test Makefile were manually
updated to allow the code to compile.
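The shape of the conversion, with an illustrative (not verbatim) warning message:

    #include "qemu/error-report.h"   /* declares warn_report() */

    /* Illustrative only; the real call sites and messages vary. */
    static void report_missing_peer(const char *name)
    {
        /* Before: fprintf(stderr, "warning: %s has no peer\n", name); */
        warn_report("%s has no peer", name);   /* prefix and newline added for us */
    }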
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Message-Id: <2c94ac3bb116cc6b8ebbcd66a254920a69665515.1503077821.git.alistair.francis@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This test provides its own mocks, so do not use the "standard"
stubs in libqemustub.a or the event loop implementation in
libqemuutil.a.
This is required on OS X, which otherwise brings in qemu-timer.o,
async.o and main-loop.o from libqemuutil.a.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Convert all the multi-line uses of fprintf(stderr, "warning:"..."\n"...
to use warn_report() instead. This helps standardise on a single
method of printing warnings to the user.
All of the warnings were changed using these commands:
find ./* -type f -exec sed -i \
'N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N;N {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
{} +
find ./* -type f -exec sed -i \
'N;N;N;N;N;N;N; {s|fprintf(.*".*warning[,:] \(.*\)\\n"\(.*\));|warn_report("\1"\2);|Ig}' \
{} +
Indentation fixed up manually afterwards.
Some of the lines were manually edited to reduce the line length to below
80 characters. Some of the lines with newlines in the middle of the
string were also manually edited to avoid checkpatch errors.
The #include lines were manually updated to allow the code to compile.
Several of the warning messages can be improved after this patch, to
keep this patch mechanical this has been moved into a later patch.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Igor Mammedov <imammedo@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Cornelia Huck <cohuck@redhat.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Jason Wang <jasowang@redhat.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <5def63849ca8f551630c6f2b45bcb1c482f765a6.1505158760.git.alistair.francis@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Flatview will make sure that we can only end up in this function with
memory sections that correspond to exactly one slot. So we don't
have to iterate multiple times. There won't be overlapping slots but
only matching slots.
Properly align the section and look up the corresponding slot. This
heavily simplifies this function.
We can now get rid of kvm_lookup_overlapping_slot().
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170911174933.20789-7-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The way flatview handles memory sections, we will never have overlapping
memory sections in kvm.
address_space_update_topology_pass() will make sure that we will only
get called for
a) an existing memory section for which we only update parameters
(log_start, log_stop).
b) an existing memory section we want to delete (region_del)
c) a brand new memory section we want to add (region_add)
We cannot have overlapping memory sections in kvm as we will first remove
the overlapping sections and then add the ones without conflicts.
Therefore we can remove the complexity for handling prefix and suffix
slots.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170911174933.20789-5-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We already require DESTROY_MEMORY_REGION_WORKS, JOIN_MEMORY_REGIONS_WORKS
was added just half a year later.
In addition, with flatview overlapping memory regions are first
removed before adding the changed one. So we can't really detect joining
memory regions this way.
Let's just get rid of this special handling.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170911174933.20789-2-david@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While loading a kernel via a multiboot-v1 image, (flags & 0x00010000)
indicates that the multiboot header contains valid addresses to load
the kernel image. These addresses are used to compute the kernel
size and the kernel text offset in the OS image. Validate these
address values to avoid an OOB access issue.
This is CVE-2017-14167.
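A hedged sketch of the kind of validation meant here, using the multiboot v1 header field names; the exact checks in hw/i386/multiboot.c differ:

    #include <stdint.h>
    #include <stdlib.h>
    #include "qemu/error-report.h"

    /* Sketch only: reject obviously inconsistent multiboot v1 load addresses
     * before they are used to compute sizes/offsets. */
    static void validate_mb_addresses(uint32_t mh_header_addr,
                                      uint32_t mh_load_addr,
                                      uint32_t mh_load_end_addr)
    {
        if (mh_load_end_addr && mh_load_end_addr < mh_load_addr) {
            error_report("invalid load_end_addr address");
            exit(1);
        }
        if (mh_header_addr < mh_load_addr) {
            error_report("invalid header_addr address");
            exit(1);
        }
    }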
Reported-by: Thomas Garnier <thgarnie@google.com>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Message-Id: <20170907063256.7418-1-ppandit@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
SunOS declares struct queue in <netinet/in.h>.
This fixes build on SmartOS (Joyent).
Patch cherry-picked from pkgsrc by jperkin (Joyent).
Signed-off-by: Kamil Rytarowski <n54@gmx.com>
Message-Id: <20170903163304.17919-1-n54@gmx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
As of kernel commit eb82feea59d6 ("KVM: hyperv: support HV_X64_MSR_TSC_FREQUENCY
and HV_X64_MSR_APIC_FREQUENCY"), KVM supports two new MSRs which are required
for nested Hyper-V to read timestamps with RDTSC + TSC page.
This commit makes QEMU advertise the MSRs with CPUID.40000003H:EAX[11] and
CPUID.40000003H:EDX[8] as specified in the Hyper-V TLFS and experimentally
verified on a Hyper-V host. The feature is enabled with the existing hv-time CPU
flag, and only if the TSC frequency is stable across migrations and known.
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170807085703.32267-5-lprosek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are 2 use cases to deal with:
1: fixed CPU models per board/soc
2: boards with a user-configurable cpu_model and a fallback to a
default cpu_model if the user hasn't specified one explicitly
For the 1st,
drop the intermediate cpu_model parsing and use the const cpu type
directly, which replaces:
typename = object_class_get_name(
cpu_class_by_name(TYPE_ARM_CPU, cpu_model))
object_new(typename)
with
object_new(FOO_CPU_TYPE_NAME)
or
cpu_generic_init(BASE_CPU_TYPE, "my cpu model")
with
cpu_create(FOO_CPU_TYPE_NAME)
As a result, the 1st use case doesn't have to invoke the unnecessary
translation, and the unneeded code is removed.
For the 2nd:
1: set the default cpu type with MachineClass::default_cpu_type and
2: use the generic cpu_model parsing that is done before machine_init()
is run, and:
2.1: drop custom cpu_model parsing where the pattern is:
typename = object_class_get_name(
cpu_class_by_name(TYPE_ARM_CPU, cpu_model))
[parse_features(typename, cpu_model, &err) ]
2.2: or replace cpu_generic_init(), which does what
2.1 does + create_cpu(typename), with just
create_cpu(machine->cpu_type)
As a result, the cpu_name -> cpu_type translation is done using the
generic machine code, including parsing of optional features
if supported/present (which removes a bunch of duplicated cpu_model
parsing code), and the default cpu type is defined in a uniform way
within machine_class_init callbacks instead of in ad hoc places
in the board's machine_init code.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1505318697-77161-6-git-send-email-imammedo@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Define the default CPU type in a generic way in pc_machine_class_init()
and let the common machine code handle cpu_model parsing.
The patch also introduces a TARGET_DEFAULT_CPU_TYPE define for 2 purposes:
* make foo_machine_class_init() look uniform on every target
* use the define in [bsd|linux]-user targets to pick the default
cpu type
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1505318697-77161-5-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
All machines that support a user-specified cpu_model either call
cpu_generic_init() or cpu_class_by_name()/CPUClass::parse_features
to parse the feature string and to get the CPU type to create.
This leads to code duplication and to hard-coding the default CPU model
within the machine_foo_init() code, and it makes it impossible to
get the CPU type before machine_init() is run.
So instead of setting default CPU models and doing the parsing in
target-specific machine_foo_init() in various ways, provide
generic, data-driven cpu_model parsing before machine_init()
is called.
In follow-up per-target patches, this will allow us to:
* define the default CPU type in a consistent/generic manner
per machine type and drop the custom code that falls back
to the default if cpu_model is NULL
* drop custom feature parsing in targets and do it
in a centralized way
* for cases of
cpu_generic_init(TYPE_BASE/DEFAULT_CPU, "some_cpu")
replace it with
cpu_create(machine->cpu_type) || cpu_create(TYPE_FOO)
depending on whether the CPU type is user-settable or not,
avoiding useless parsing and clearly documenting where the
CPU model is user-settable and where it is fixed.
The patch allows machine subclasses to define a default CPU type
per machine class at class_init() time; if that is set, the
generic code will parse cpu_model into MachineState::cpu_type,
which will be used to create CPUs for that machine instance.
This allows a gradual per-board conversion.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1505318697-77161-4-git-send-email-imammedo@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Almost every user of cpu_generic_init() checks for a
returned NULL and then reports the failure in a custom way
and aborts the process.
Some users assume that the call can't fail and don't check
for failure, though they should have.
In either case a cpu_generic_init() failure is fatal,
so instead of checking for failure and reporting
it in various ways, make cpu_generic_init() report
errors in a consistent way and terminate QEMU on failure.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1505318697-77161-3-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Complete the transition by renaming this header, which was
shared by block/iscsi.c and the SCSI emulation code.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The new option can be used to indicate that the file contents can
be destroyed and don't need to be flushed to disk when QEMU exits
or when the memory backend object is removed.
Internally, it will trigger a madvise(MADV_REMOVE) call when the
memory backend is removed.
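A minimal sketch of what the option boils down to; QEMU itself goes through its qemu_madvise()/QEMU_MADV_REMOVE wrappers (see the osdep patch in this series) rather than calling madvise() directly:

    #include <stdio.h>
    #include <sys/mman.h>

    /* Sketch: drop the backing data of the mapped region when the
     * memory backend goes away. */
    static void discard_backing_data(void *ptr, size_t size)
    {
        if (madvise(ptr, size, MADV_REMOVE) != 0) {
            perror("madvise(MADV_REMOVE)");
        }
    }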
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170824192315.5897-4-ehabkost@redhat.com>
[ehabkost: fixup: improved documentation]
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Tested-by: Zack Cornelius <zack.cornelius@kove.net>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Move more knowledge of SG_IO out of hw/scsi/scsi-generic.c, for
reusability.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move more knowledge of sense data format out of hw/scsi/scsi-bus.c
for reusability.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
util/scsi.c includes some SCSI code that is shared by block/iscsi.c and
hw/scsi, but the introduction of the persistent reservation helper
will add many more instances of this. There is also include/block/scsi.h,
which actually is not part of the core block layer.
The persistent reservation manager will also need a home. A scsi/
directory provides one for both the aforementioned shared code and
the PR manager code.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
After introducing the scsi/ subdirectory, there will be a scsi_build_sense
function that is the same as scsi_req_build_sense but without needing
a SCSIRequest. The existing scsi_build_sense function gets in the way,
remove it.
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since Linux switched to blk-mq as the default in Linux commit
5c279bd9e406 ("scsi: default to scsi-mq"), virtio-scsi LUNs consume
about 10x as much guest kernel memory.
This commit allows you to choose the virtqueue size for each
virtio-scsi-pci controller like this:
-device virtio-scsi-pci,id=scsi,virtqueue_size=16
The default is still 128 as before. Using smaller virtqueue_size
allows many more disks to be added to small memory virtual machines.
For a 1 vCPU, 500 MB, no swap VM I observed:
With scsi-mq enabled (upstream kernel): 175 disks
-"- ditto -"- virtqueue_size=64: 318 disks
-"- ditto -"- virtqueue_size=16: 775 disks
With scsi-mq disabled (kernel before 5c279bd9e406): 1755 disks
Note that to have any effect, this requires a kernel patch:
https://lkml.org/lkml/2017/8/10/689
Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Message-Id: <20170810165255.20865-1-rjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SSE4.1 phminposuw instruction finds the minimum 16-bit element in
the source vector, putting the value of that element in the low 16
bits of the destination vector, the index of that element in the next
three bits and zeroing the rest of the destination. The helper for
this operation fills the destination from high to low, meaning that
when the source and destination are the same register, the minimum
source element can be overwritten before it is copied to the
destination. This patch fixes it to fill the destination from low to
high instead, so the minimum source element is always copied first.
This fixes one gcc test failure in my GCC 6-based testing (and so
concludes the present sequence of patches, as I don't have any further
gcc test failures left in that testing that I attribute to QEMU bugs).
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.20.1708111422580.11919@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
One of the cases of the SSE4.2 pcmpestri / pcmpestrm / pcmpistri /
pcmpistrm instructions does a substring search. The implementation of
this case in the pcmpxstrx helper is incorrect. The operation in this
case is a search for a string (argument d to the helper) in another
string (argument s to the helper); if a copy of d at a particular
position would run off the end of s, the resulting output bit should
be 0 whether or not the strings match in the region where they
overlap, but the QEMU implementation was wrongly comparing only up to
the point where s ends and counting it as a match if an initial
segment of d matched a terminal segment of s. Here, "run off the end
of s" means that some byte of d would overlap some byte outside of s;
thus, if d has zero length, it is considered to match everywhere,
including after the end of s. This patch fixes the implementation to
correspond with the proper instruction semantics. This fixes four gcc
test failures in my GCC 6-based testing.
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.20.1708102139310.8101@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SSE4.1 packusdw instruction combines source and destination
vectors of signed 32-bit integers into a single vector of unsigned
16-bit integers, with unsigned saturation. When the source and
destination are the same register, this means each 32-bit element of
that register is used twice as an input, to produce two of the 16-bit
output elements, and so if the operation is carried out
element-by-element in-place, no matter what the order in which it is
applied to the elements, the first element's operation will overwrite
some future input. The helper for packssdw avoids this issue by
computing the result in a local temporary and copying it to the
destination at the end; this patch fixes the packusdw helper to do
likewise. This fixes three gcc test failures in my GCC 6-based
testing.
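A generic illustration of the in-place aliasing hazard and the temporary-buffer fix; saturation is simplified and this is not the actual helper code, only the pattern it uses:

    #include <stdint.h>
    #include <string.h>

    static uint16_t sat_u16(int32_t v)
    {
        return v < 0 ? 0 : v > 0xffff ? 0xffff : (uint16_t)v;
    }

    /* d32/s32 are the 32-bit views and d16 the 16-bit view of the registers;
     * in the problematic case they alias the same storage. */
    static void packusdw_like(uint16_t d16[8], const int32_t d32[4],
                              const int32_t s32[4])
    {
        uint16_t r[8];
        int i;

        for (i = 0; i < 4; i++) {
            r[i]     = sat_u16(d32[i]);   /* low half from the destination */
            r[i + 4] = sat_u16(s32[i]);   /* high half from the source */
        }
        memcpy(d16, r, sizeof(r));        /* write back only after all reads */
    }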
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.20.1708100023050.9262@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It turns out that my recent fix to set rip_offset when emulating some
SSE4.1 instructions needs generalizing to cover a wider class of
instructions. Specifically, every instruction in the sse_op_table7
table, coming from various instruction set extensions, has an 8-bit
immediate operand that comes after any memory operand, and so needs
rip_offset set for correctness if there is a memory operand that is
rip-relative, and my patch only set it for a subset of those
instructions. This patch moves the rip_offset setting to cover the
wider class of instructions, so fixing 9 further gcc testsuite
failures in my GCC 6-based testing. (I do not know whether there
might be still further classes of instructions missing this setting.)
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.20.1708082350340.23380@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The SSE4.1 pmovsx* and pmovzx* instructions take packed 1-byte, 2-byte
or 4-byte inputs and sign-extend or zero-extend them to a wider vector
output. The associated helpers for these instructions do the
extension on each element in turn, starting with the lowest. If the
input and output are the same register, this means that all the input
elements after the first have been overwritten before they are read.
This patch makes the helpers extend starting with the highest element,
not the lowest, to avoid such overwriting. This fixes many GCC test
failures (161 in the gcc testsuite in my GCC 6-based testing) when
testing with a default CPU setting enabling those instructions.
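A generic illustration of the fix, assuming a 128-bit register modeled as a union (as QEMU's vector helpers do); this is not the actual helper:

    #include <stdint.h>

    typedef union {
        uint8_t  b[16];
        uint16_t w[8];
    } Vec128;

    /* Zero-extend the low 8 bytes to 8 words, in place: going from the highest
     * element down means each d->b[i] is still unmodified when it is read.
     * A low-to-high loop would clobber d->b[1..7] before reading them. */
    static void pmovzxbw_like(Vec128 *d)
    {
        int i;

        for (i = 7; i >= 0; i--) {
            d->w[i] = d->b[i];
        }
    }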
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.20.1708082018390.23380@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The current FIS printing routines dump the FIS to the screen. Adjust this
so that it dumps to a buffer instead, then use this ability to provide FIS
dump mechanisms via trace-events instead of compile-time defines.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170901001502.29915-9-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Create a new enum so that we can name the IRQ bits, which will make debugging
them a little nicer if we can print them out. Not handled in this patch, but
this will make it possible to get a nice debug printf detailing exactly which
status bits are set, as multiple can be set at any given time.
As a consequence of this patch, it is no longer possible to set multiple IRQ
codes at once, but nothing was utilizing this ability anyway.
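A hypothetical sketch of such an enum; the actual names and the full bit list live in QEMU's AHCI code and may differ:

    /* Naming each bit of the port interrupt status register lets trace events
     * print exactly which bits are set. Bit positions follow the AHCI spec. */
    enum AHCIPortIRQ {
        AHCI_PORT_IRQ_BIT_DHRS = 0,   /* device-to-host register FIS */
        AHCI_PORT_IRQ_BIT_PSS  = 1,   /* PIO setup FIS */
        AHCI_PORT_IRQ_BIT_DSS  = 2,   /* DMA setup FIS */
        AHCI_PORT_IRQ_BIT_SDBS = 3,   /* set device bits FIS */
        AHCI_PORT_IRQ_BIT_UFS  = 4,   /* unknown FIS */
        /* ... */
    };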
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170901001502.29915-8-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Replace the DEBUG_IDE preprocessor definition with something more
appropriately flexible, using the trace-events subsystem.
This will be less prone to bitrot and will more effectively allow
us to target just the functions we care about.
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170901001502.29915-2-jsnow@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
QEMU currently aborts with an assertion message when the user is trying
to remove a dscm1xxxx again:
$ aarch64-softmmu/qemu-system-aarch64 -S -M integratorcp -nographic
QEMU 2.9.93 monitor - type 'help' for more information
(qemu) device_add dscm1xxxx,id=xyz
(qemu) device_del xyz
**
ERROR:qemu/qdev-monitor.c:872:qdev_unplug: assertion failed: (hotplug_ctrl)
Aborted (core dumped)
Looks like this device has to be wired up in code and is not meant
to be hot-pluggable, so let's mark it with user_creatable = false.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1503543783-17192-1-git-send-email-thuth@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
Fixes a read-after-free error reported at
https://lists.gnu.org/archive/html/qemu-devel/2017-08/msg04243.html
Message-Id: <59a56959-ca12-ea75-33fa-ff07eba1b090@redhat.com>
The ich9-ahci device creates ide buses and attaches them as QOM children
at realize time; however, it forgets to properly clean them up
at unrealize time and frees the memory containing these children,
with the following call chain:
qdev_device_add()
object_property_set_bool('realized', true)
device_set_realized()
...
pci_qdev_realize() -> pci_ich9_ahci_realize() -> ahci_realize()
...
s->dev = g_new0(AHCIDevice, ports);
...
AHCIDevice *ad = &s->dev[i];
ide_bus_new(&ad->port, sizeof(ad->port), qdev, i, 1);
^^^ creates a bus in the memory allocated by the g_new0() above
and adds it as a child property to the ahci device
...
hotplug_handler_plug(); -> goto post_realize_fail;
pci_qdev_unrealize() -> pci_ich9_uninit() -> ahci_uninit()
...
g_free(s->dev);
^^^ free memory that holds children busses
return with error from device_set_realized()
As a result, when qdev_device_add() later tries to unparent ich9-ahci
after the failed device_set_realized(),
object_unparent() -> object_property_del_child()
iterates over the existing QOM children, including the buses added by
ide_bus_new(), and tries to unparent them, which causes access to the
freed memory where they were located.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 1503938085-169486-1-git-send-email-imammedo@redhat.com
Signed-off-by: John Snow <jsnow@redhat.com>
It's not even clear what the interface REG and VAL32 were supposed to mean.
All uses had REG = 0 and VAL32 was the bitset assigned to the destination.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
This fixes building for ppc64 on ppc32 (changed in 5964fca8a1):
tcg/ppc/tcg-target.inc.c: In function 'tb_target_set_jmp_target':
include/qemu/compiler.h:86:30: error: static assertion failed: \
"not expecting: sizeof(*(uint64_t *)jmp_addr) > ATOMIC_REG_SIZE"
QEMU_BUILD_BUG_ON(sizeof(*ptr) > ATOMIC_REG_SIZE); \
^
tcg/ppc/tcg-target.inc.c:1377:9: note: in expansion of macro 'atomic_set'
atomic_set((uint64_t *)jmp_addr, pair);
^
Suggested-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170911204936.5020-1-f4bug@amsat.org>
[rth: Added commentary requested by pmm.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Python queue, 2017-09-15
# gpg: Signature made Sat 16 Sep 2017 00:14:01 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/python-next-pull-request:
qemu.py: include debug information on launch error
qemu.py: improve message on negative exit code
qemu.py: use os.path.null instead of /dev/null
qemu.py: avoid writing to stdout/stderr
qemu.py: fix is_running() return before first launch()
qtest.py: Few pylint/style fixes
qmp.py: Avoid overriding a builtin object
qmp.py: Avoid "has_key" usage
qmp.py: Use object-based class for QEMUMonitorProtocol
qmp.py: Couple of pylint/style fixes
qemu.py: Use custom exceptions rather than Exception
qemu.py: Simplify QMP key-conversion
qemu.py: Use iteritems rather than keys()
qemu|qtest: Avoid dangerous arguments
qemu.py: Pylint/style fixes
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When launching a VM, if an exception happens and the VM does not come
up, it is useful to see the qemu command line and the qemu command
output.
This patch creates that message. Notice that self._iolog needs to be
cleared at the beginning of launch() to make sure we do not expose the
qemu log from a previous launch if the current one fails.
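A rough sketch of the idea (the helper name and logger setup below are
illustrative assumptions, not the patch itself):
    import logging

    LOG = logging.getLogger(__name__)

    def report_launch_failure(qemu_full_args, iolog):
        # Surface the command line and any captured output of a failed launch.
        LOG.debug('Error launching VM')
        if qemu_full_args:
            LOG.debug('Command: %r', ' '.join(qemu_full_args))
        if iolog:
            LOG.debug('Output: %r', iolog)
launch() would clear self._iolog first, run qemu, and on any exception
call such a helper before re-raising.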
Signed-off-by: Amador Pahim <apahim@redhat.com>
Message-Id: <20170901112829.2571-6-apahim@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The current message shows 'self._args', which contains only part of the
options used in the qemu command line.
This patch makes the full qemu args list an instance variable and then
uses it in the negative exit code message.
The message was moved outside the 'if is_running' block to make sure it
is logged even if the VM finishes before the call to shutdown().
Signed-off-by: Amador Pahim <apahim@redhat.com>
Message-Id: <20170901112829.2571-5-apahim@redhat.com>
[ehabkost: removed superfluous parenthesis]
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
This module should not write directly to stdout/stderr. Instead, it
should either raise exceptions or just log the messages and let the
callers handle them and decide what to do. For example, scripts could
choose to send the log messages to stderr and/or write them to a file if
verbose or debugging mode is enabled.
This patch replaces the writes to stderr with an exception in
send_fd_scm() when _socket_scm_helper is not set or not present. In the
same method, the subprocess Popen now redirects stdout/stderr to
logging.debug instead of writing to the system stderr. As a consequence,
since Popen.communicate() is now used (in order to get the stdout), the
subsequent call to wait() became redundant and was replaced by
Popen.returncode.
The shutdown() message on a negative exit code is now logged via
logging.warn instead of written to the system stderr.
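A minimal sketch of the described flow (the function name and arguments
are assumptions for illustration):
    import logging
    import subprocess

    def run_scm_helper(helper, fd_param):
        # Route the helper's output to the logger instead of the process stderr.
        proc = subprocess.Popen([helper] + fd_param,
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        output = proc.communicate()[0]   # communicate() already waits
        if output:
            logging.debug(output)
        return proc.returncode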
Signed-off-by: Amador Pahim <apahim@redhat.com>
Message-Id: <20170901112829.2571-3-apahim@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
is_running() returns None when called before the first time we
call launch():
>>> import qemu
>>> vm = qemu.QEMUMachine('qemu-system-x86_64')
>>> vm.is_running()
>>>
It should return False instead. This patch fixes that.
For consistency, it also removes the parentheses from the second clause,
as they are not really needed.
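A minimal sketch of the intended behaviour (class and attribute names are
only illustrative):
    class MachineSketch(object):
        def __init__(self):
            self._popen = None    # no process exists before the first launch()

        def is_running(self):
            # False before the first launch() and after the process has been
            # waited on; True only while the child is still running.
            return self._popen is not None and self._popen.returncode is None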
Signed-off-by: Amador Pahim <apahim@redhat.com>
Message-Id: <20170901112829.2571-2-apahim@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The "id" is a builtin method to get object's identity and should not be
overridden. This might bring some issues in case someone was directly
calling "cmd(..., id=id)" but I haven't found such usage on brief search
for "cmd\(.*id=".
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170818142613.32394-10-ldoktor@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
No actual code changes, just initializing attributes earlier to avoid
AttributeError on early introspection, a few pylint/style fixes and
docstring clarifications.
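For illustration, the early-initialization idea looks roughly like this
(attribute names assumed):
    class MonitorSketch(object):
        def __init__(self, address):
            # Define every attribute up front so that introspecting a
            # half-constructed object never raises AttributeError.
            self._address = address
            self._events = []
            self._sock = None        # created later, when connecting
            self._sockfile = None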
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170818142613.32394-7-ldoktor@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The bare Exception should not be widely used. It makes sense to be a
bit more specific and use better-suited custom exceptions. As a benefit,
we can store the full reply in the exception in case someone needs it
when catching the exception.
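A sketch of the pattern (exception names are illustrative):
    class QMPError(Exception):
        # Base class lets callers catch all QMP-related failures at once.
        pass

    class MonitorResponseError(QMPError):
        def __init__(self, reply):
            super(MonitorResponseError, self).__init__('QMP error: %r' % reply)
            self.reply = reply    # full reply kept for callers that need it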
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170818142613.32394-6-ldoktor@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
A list is mutable in Python and, when used as a default argument, is
shared between calls, so it can end up modifying other objects'
arguments. Reproducer:
>>> vm1 = QEMUMachine("qemu")
>>> vm2 = QEMUMachine("qemu")
>>> vm1._wrapper.append("foo")
>>> print vm2._wrapper
['foo']
In this case `args` is actually copied, so it would be safe to keep,
but it is not good practice. The same issue applies to the inherited
qtest module.
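A generic illustration of the pitfall and the usual fix (not the actual
patch):
    # Pitfall: the default list is created once and shared by every call.
    def add_wrapper_arg(arg, wrapper=[]):
        wrapper.append(arg)          # mutates the shared default list
        return wrapper

    # Safer idiom: use None as the sentinel and build a fresh list.
    def add_wrapper_arg_fixed(arg, wrapper=None):
        wrapper = list(wrapper) if wrapper else []
        wrapper.append(arg)
        return wrapper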
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-Id: <20170818142613.32394-3-ldoktor@redhat.com>
Reviewed-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
pull-seccomp-20170915
# gpg: Signature made Fri 15 Sep 2017 09:21:15 BST
# gpg: using RSA key 0xDF32E7C0F0FFF9A2
# gpg: Good signature from "Eduardo Otubo (Senior Software Engineer) <otubo@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: D67E 1B50 9374 86B4 0723 DBAB DF32 E7C0 F0FF F9A2
* remotes/otubo/tags/pull-seccomp-20170915:
buildsys: Move seccomp cflags/libs to per object
seccomp: add resourcecontrol argument to command line
seccomp: add spawn argument to command line
seccomp: add elevateprivileges argument to command line
seccomp: add obsolete argument to command line
seccomp: changing from whitelist to blacklist
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
ppc patch queue 2017-09-15
Here's the current batch of accumulated ppc patches. These are all
pretty simple bugfixes or cleanups, no big new features here.
# gpg: Signature made Fri 15 Sep 2017 04:50:00 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.11-20170915:
ppc/kvm: use kvm_vm_check_extension() in kvmppc_is_pr()
spapr_events: use QTAILQ_FOREACH_SAFE() in spapr_clear_pending_events()
spapr_cpu_core: cleaning up qdev_get_machine() calls
spapr_pci: don't create 64-bit MMIO window if we don't need to
spapr_pci: convert sprintf() to g_strdup_printf()
spapr_cpu_core: fail gracefully with non-pseries machine types
xics: fix several error leaks
vfio, spapr: Fix levels calculation
spapr_pci: handle FDT creation errors with _FDT()
spapr_pci: use the common _FDT() helper
spapr: fix CAS-generated reset
ppc/xive: fix OV5_XIVE_EXPLOIT bits
spapr: only update SDR1 once per-cpu during CAS
spapr_pci: use g_strdup_printf()
spapr_pci: drop useless check in spapr_populate_pci_child_dt()
spapr_pci: drop useless check in spapr_phb_vfio_get_loc_code()
hw/ppc/spapr.c: cleaning up qdev_get_machine() calls
net: Add SunGEM device emulation as found on Apple UniNorth
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Right now, trace_event_set_vcpu_state_dynamic() asynchronously enables
events in the case where a vCPU is executing TCG code. If the vCPU is
being created, this makes some events like "guest_cpu_enter" not be
traced.
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Message-id: 150525662577.19850.13767570977540117247.stgit@frigg.lan
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Like many other libraries, libseccomp cflags and libs should only apply
to the building of necessary objects. Do so in the usual way with the
help of per object variables.
Signed-off-by: Fam Zheng <famz@redhat.com>
This patch adds [,resourcecontrol=deny] to the `-sandbox on' option. It
blacklists all process affinity and scheduler priority system calls,
preventing the QEMU process from changing its CPU affinity or scheduler
priority.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
This patch adds the [,spawn=deny] argument to the `-sandbox on' option.
It blacklists the fork and execve system calls, preventing QEMU from
spawning new threads or processes.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
This patch introduces the new argument
[,elevateprivileges=allow|deny|children] to the `-sandbox on' option. It
allows or denies the QEMU process to elevate its privileges by
blacklisting all set*uid|gid system calls. The 'children' option will
let forks and execves run unprivileged.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
This patch introduces the argument [,obsolete=allow] to the `-sandbox on'
option. It allows QEMU to run safely on old systems that still rely on
obsolete system calls.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
This patch changes the default behavior of the seccomp filter from a
whitelist to a blacklist. By default, all system calls are now allowed,
and a small blacklist of definitely forbidden ones is applied.
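For instance, the new arguments described above could be combined on a
single command line (illustrative invocation, not taken from the patches):
    qemu-system-x86_64 -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny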
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
hmp() passes its string argument through the sprintf() family;
with a proper attribute, gcc -Wformat warns us when we do something
dangerous like passing a non-constant format string. Fortunately,
all our strings were safe, but an unintended % in the string is easy
to avoid and therefore worth checking for.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Prior to commit 063c23d9, we were tracking a list of parallel
qtest objects, in order to safely clean up a SIGABRT handler
only after the last connection quits. But when we switched to
more of glib's infrastructure, the list became dead code that
is never assigned to.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Assertions should be separate from the side effects, since in
theory, g_assert() can be disabled (in practice, we can't really
ever do that).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Back when the test was introduced, in commit 62c39b307, the
test was set up to run qemu-ga directly on the host performing
the test, and defaults to limiting itself to safe commands. At
the time, it was envisioned that setting QGA_TEST_SIDE_EFFECTING
in the environment could cover a few more commands, while noting
the potential danger of those side effects running in the host.
But this has NEVER been tested: if you enable the environment
variable, the test WILL fail. One obvious reason: if you are not
running as root, you'll probably get a permission failure when
trying to freeze the file systems, or when changing system time.
Less obvious: if you run the test as root (wow, you're brave), you
could end up hanging if the test tries to log things to a
temporarily frozen filesystem. But the cutest reason of all: if
you get past the above hurdles, the test uses invalid JSON in
test_qga_fstrim() (missing '' around the dictionary key 'minimum'),
and will thus fail an assertion in qmp_fd().
Rather than leave this untested time-bomb in place, rip it out.
Hopefully, as originally envisioned, we can find an opportunity
to test an actual sandboxed guest where the guest-agent has
full permissions and will not unduly affect the host running
the test - if so, 'git revert' can be used if desired, for
salvaging any useful parts of this attempt.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Broken with commit b4ba67d9a7 ("libqos: Change PCI accessors to take
opaque BAR handle") a while ago, but nobody noticed since the tests are
not run by default: The msix_pba_bar is not correctly initialized
anymore if bir_pba has the same value as bir_table. With this fix,
"make check SPEED=slow" should work fine again.
Fixes: b4ba67d9a7
Tested-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The user can currently still cause an abort() if running certain tests
(like the prom-env-test) without setting the QTEST_QEMU_BINARY first.
A similar problem was fixed with commit 7c933ad61b
already, but it forgot to also take care of the qtest_get_arch() function,
so let's introduce a proper wrapper around getenv("QTEST_QEMU_BINARY")
that can be used in both locations now.
Buglink: https://bugs.launchpad.net/qemu/+bug/1713434
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
The problem with puv3 has been fixed with 0ac241bcf9
('unicore32: abort when entering "x 0" on the monitor') and the problem
with tricore_testboard has been fixed with b190f477e2
('qemu-system-tricore: segfault when entering "x 0" on the monitor').
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
A lot of tests provide code for adding and removing a device via the
device_add and device_del QMP commands. Maintaining this code in so many
places is cumbersome and error-prone (some of the code parts check the
responses for device deletion in an incorrect way; for example, we've got
to deal with both the error code and the DEVICE_DEL event here). So let's provide
some proper generic functions for adding and removing a device instead.
The code for correctly unplugging a device has been taken from a patch
from Peter Xu.
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
If the host has both KVM PR and KVM HV loaded and we pass:
-machine pseries,accel=kvm,kvm-type=PR
kvmppc_is_pr() returns false instead of true. Since the helper
is mostly used as a fallback, it doesn't have any real impact with
recent kernels. A notable exception is the workaround to allow
migration between compatible hosts with different PVRs (eg, POWER8
and POWER8E), since KVM still doesn't provide a way to check if a
specific PVR is supported (see commit c363a37a45 for details).
According to the official KVM API documentation [1], KVM_PPC_GET_PVINFO
is a "vm ioctl", but we check it as a global ioctl. The following function
in KVM is hence called with kvm == NULL and assumes we're in HV mode.
int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
{
    int r;
    /* Assume we're using HV mode when the HV module is loaded */
    int hv_enabled = kvmppc_hv_ops ? 1 : 0;

    if (kvm) {
        /*
         * Hooray - we know which VM type we're running on. Depend on
         * that rather than the guess above.
         */
        hv_enabled = is_kvmppc_hv_enabled(kvm);
    }
Let's use kvm_vm_check_extension() to fix the issue.
[1] https://www.kernel.org/doc/Documentation/virtual/kvm/api.txt
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
QTAILQ_FOREACH_SAFE() must be used when removing the current element
inside the loop block.
This fixes a use-after-free error introduced by commit 5625817423
and reported by Coverity (CID 1381017).
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch removes the qdev_get_machine() calls that are made
in spapr_cpu_core.c in situations where we can get an existing
pointer for the MachineState by either passing it as an argument
to the function or by using other already available pointers.
Credits to Daniel Henrique Barboza for the idea and the changelog
text.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When running a pseries-2.2 or older machine type, we get the following
lines in info mtree:
address-space: memory
...
ffffffffffffffff-ffffffffffffffff (prio 0, i/o): alias
pci@800000020000000.mmio64-alias @pci@800000020000000.mmio
ffffffffffffffff-ffffffffffffffff
address-space: cpu-memory
...
ffffffffffffffff-ffffffffffffffff (prio 0, i/o): alias
pci@800000020000000.mmio64-alias @pci@800000020000000.mmio
ffffffffffffffff-ffffffffffffffff
The same thing occurs when running a pseries-2.7 with
-global spapr-pci-host-bridge.mem_win_size=2147483648
This happens because we always create a 64-bit MMIO window, even if
we didn't explicitly request one (ie, mem64_win_size == 0) and the
32-bit window is below 2 GiB. It doesn't seem to have an impact on the
guest though because spapr_populate_pci_dt() doesn't advertise the
bogus windows when mem64_win_size == 0.
Since these memory regions don't induce any state, we can safely
choose to not create them when their address is equal to -1,
without breaking migration from existing setups.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since commit 7cca3e466e ("ppc: spapr: Move VCPU ID calculation into
sPAPR"), QEMU aborts when started with a *-spapr-cpu-core device and
a non-pseries machine.
Let's rely on the already existing call to object_dynamic_cast() instead
of using the SPAPR_MACHINE() macro.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If object_property_get_link() fails then it allocates an error, which
must be freed before returning. The error_get_pretty() function is
merely an accessor to the error message and doesn't free anything.
The error.h header indicates how to do it right:
* Pass an existing error to the caller with the message modified:
* error_propagate(errp, err);
* error_prepend(errp, "Could not frobnicate '%s': ", name);
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The existing code tries to round up the number of pages, but @pages is
always calculated as the rounded-up value minus one, which makes ctz64()
always return 0 and leaves create.levels always set to 1.
This removes the wrong "-1" and allows having more than 1 level. This becomes
handy for >128GB guests with standard 64K pages as this requires blocks
with zone order 9 and the popular limit of CONFIG_FORCE_MAX_ZONEORDER=9
means that only blocks up to order 8 are allowed.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
libfdt failures when creating the FDT should cause QEMU to terminate.
Let's use the _FDT() macro which does just that instead of propagating
the error to the caller. spapr_populate_pci_child_dt() no longer needs
to return a value in this case.
Note that, on the way, this gets rid of the following nonsensical lines:
g_assert(!ret);
if (ret) {
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
All other users in hw/ppc already consider an error when building
the FDT to be fatal, even on hotplug paths. There's no valid reason
for spapr_pci to behave differently. So let's use the common _FDT()
helper, which terminates QEMU when libfdt fails.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The OV5_MMU_RADIX_300 requires special handling in the CAS negotiation
process. It is cleared from the option vector of the guest before
evaluating the changes and re-added later. But, when testing for a
possible CAS reset:
spapr->cas_reboot = spapr_ovec_diff(ov5_updates,
ov5_cas_old, spapr->ov5_cas);
the OV5_MMU_RADIX_300 bit will be seen each time as removed from the
previous OV5 set, hence generating a reset loop.
Fix this problem by also clearing the same bit in the ov5_cas_old set.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On POWER9, the Client Architecture Support (CAS) negotiation process
determines whether the guest operates in XIVE Legacy compatibility or
in XIVE exploitation mode. Now that we have initial guest support for
the XIVE interrupt controller, let's fix the bit definitions, which have
evolved in the latest specs.
The platform advertises the XIVE Exploitation Mode support using the
property "ibm,arch-vec-5-platform-support-vec-5", byte 23 bits 0-1 :
- 0b00 XIVE legacy mode Only
- 0b01 XIVE exploitation mode Only
- 0b10 XIVE legacy or exploitation mode
The OS asks for XIVE Exploitation Mode support using the property
"ibm,architecture-vec-5", byte 23 bits 0-1:
- 0b00 XIVE legacy mode Only
- 0b01 XIVE exploitation mode Only
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit b55d295e3e added the possibility to support HPT resizing with KVM.
In the case of PR, we need to pass the userspace address of the HPT to KVM
using the SDR1 slot.
This is handled by kvmppc_update_sdr1() which uses CPU_FOREACH() to update
all CPUs. It is hence not needed to call kvmppc_update_sdr1() for each CPU.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Building strings with g_strdup_printf() instead of snprintf() is
a QEMU common practice.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
spapr_phb_get_loc_code() either returns a non-null pointer, or aborts
if g_strdup_printf() failed to allocate memory.
Signed-off-by: Greg Kurz <groug@kaod.org>
[dwg: Grammatical fix to commit message]
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
g_strdup_printf() either returns a non-null pointer, or aborts if it
failed to allocate memory.
Signed-off-by: Greg Kurz <groug@kaod.org>
[dwg: Grammatical fix to commit message]
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This patch removes the qdev_get_machine() calls that are made in
spapr.c in situations where we can get an existing pointer for
the MachineState by either passing it as an argument to the function
or by using other already available pointers.
The following changes were made:
- spapr_node0_size: static function that is called two times:
at spapr_setup_hpt_and_vrma and ppc_spapr_init. In both cases we can
pass an existing MachineState pointer to it.
- spapr_build_fdt: MachineState pointer can be retrieved from
the existing sPAPRMachineState pointer.
- spapr_boot_set: the opaque in the first arg is a sPAPRMachineState
pointer as we can see inside ppc_spapr_init:
qemu_register_boot_set(spapr_boot_set, spapr);
We can get a MachineState pointer from it.
- spapr_machine_device_plug and spapr_machine_device_unplug_request: the
MachineState, sPAPRMachineState, MachineClass and sPAPRMachineClass pointers
can all be retrieved from the HotplugHandler pointer.
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This adds a simplistic emulation of the Sun GEM ethernet controller
found in Apple ASICs.
Currently we only support the Apple UniNorth 1.x variant, but the
other Apple or Sun variants should mostly be a matter of adding
PCI ID options.
We have a very primitive emulation of a single Broadcom 5201 PHY
which is supported by the MacOS driver.
This model brings out-of-the-box networking to MacOS 9, and all
versions of OS X I tried with the mac99 platform.
Further improvements from Mark:
- Remove sungem.h file, moving constants into sungem.c as required
- Switch to using tracepoints for debugging
- Split register blocks into separate memory regions
- Use arrays in SunGEMState to hold register values
- Add state-saving support
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Previously, single-stepping through an ERET instruction via GDB
would result in the debugger entering the "next" PC after the ERET
instruction. When debugging in kernel mode, this also causes unintended
behavior, because the debugger will try to access memory from the EL0
point of view.
Signed-off-by: Jaroslaw Pelczar <j.pelczar@samsung.com>
Message-id: 001c01d32895$483027f0$d89077d0$@samsung.com
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a machine level virtualization property. This defaults to false and can be
set to true using this machine command line argument:
-machine xlnx-zcu102,virtualization=on
This follows what the ARM virt machine does.
This property only applies to the ZCU102 machine. The EP108 machine does
not have this property.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add a machine level secure property. This defaults to false and can be
set to true using this machine command line argument:
-machine xlnx-zcu102,secure=on
This follows what the ARM virt machine does.
This property only applies to the ZCU102 machine. The EP108 machine does
not have this property.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In preparation for future work, let's manually create the Xilinx machines.
This will allow us to set properties for the machines in the future.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The EP108 is an early access development board. Now that silicon is in
production, people have access to the ZCU102. Let's rename the internal
QEMU files and variables to use the ZCU102.
There is no functional change here as the EP108 is still a valid board
option.
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In the v7M and v8M ARM ARM, the magic exception return values are
referred to as EXC_RETURN values, and in QEMU we use V7M_EXCRET_*
constants to define bits within them. Rename the 'type' variable
which holds the exception return value in do_v7m_exception_exit()
to excret, making it clearer that it does hold an EXC_RETURN value.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505137930-13255-8-git-send-email-peter.maydell@linaro.org
The exception-return magic values get some new bits in v8M, which
makes some bit definitions for them worthwhile.
We don't use the bit definitions for the switch on the low bits
which checks the return type for v7M, because this is defined
in the v7M ARM ARM as a set of valid values rather than via
per-bit checks.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1505137930-13255-7-git-send-email-peter.maydell@linaro.org
In several places we were unconditionally applying the
nvic_gprio_mask() to a priority value. This is incorrect
if the priority is one of the fixed negative priority
values (for NMI and HardFault), so don't do it.
This bug would have caused both NMI and HardFault to be
considered as the same priority and so NMI wouldn't
correctly preempt HardFault.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1505137930-13255-5-git-send-email-peter.maydell@linaro.org
HMP pull 2017-09-14
# gpg: Signature made Thu 14 Sep 2017 15:57:30 BST
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-hmp-20170914:
hmp: introduce 'info memory_size_summary' command
qmp: introduce query-memory-size-summary command
hmp: extend "info numa" with hotplugged memory information
tests/hmp: test "none" machine with memory
dump: do not dump non-existent guest memory
hmp: fix "dump-quest-memory" segfault (arm)
hmp: fix "dump-quest-memory" segfault (ppc)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It does not really make sense to dump memory that is not there.
Moreover, that fixes a segmentation fault when calling dump-guest-memory
with no filter for a machine with no memory defined.
New behaviour is:
(qemu) dump-guest-memory /dev/null
dump: no guest memory to dump
(qemu) dump-guest-memory /dev/null 0 4096
dump: no guest memory to dump
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Peter Xu <peterx@redhat.com>
Message-Id: <20170913142036.2469-4-lvivier@redhat.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Running QEMU with
qemu-system-aarch64 -M none -nographic -m 256
and executing
dump-guest-memory /dev/null 0 8192
results in segfault
Fix by checking if we have CPU, and exit with
error if there is no CPU:
(qemu) dump-guest-memory /dev/null
this feature or command is not currently supported
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Message-Id: <20170913142036.2469-3-lvivier@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Running QEMU with
qemu-system-ppc64 -M none -nographic -m 256
and executing
dump-guest-memory /dev/null 0 8192
results in segfault
Fix by checking if we have CPU, and exit with
error if there is no CPU:
(qemu) dump-guest-memory /dev/null
this feature or command is not currently supported
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <20170913142036.2469-2-lvivier@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Nowadays we use libusb for usb-host, so we don't have different code
for linux vs. bsd any more. So there is little reason to have the
HOST_USB variable, we can just write things directly into the Makefile
and avoid a pointless indirection.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 20170908111217.21985-2-kraxel@redhat.com
The existing XHCI code reads the Event Ring Segment Table Base Address
Register (ERSTBA) every time it is changed. However, zero is its
default state, so one would think that zero there means it is not in use.
This adds a check for ERSTBA in addition to the existing check for
the Event Ring Segment Table Size Register (ERSTSZ).
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Message-id: 20170911065606.40600-1-aik@ozlabs.ru
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Some termcaps (found using SLES11SP1) use [? sequences. According to man
console_codes (http://linux.die.net/man/4/console_codes) the question mark
is a nop and should simply be ignored.
This patch does exactly that, rendering screen output readable when
outputting guest serial consoles to the graphical console emulator.
Signed-off-by: Alexander Graf <agraf@suse.de>
Message-id: 20170829113818.42482-1-agraf@suse.de
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
virtio-gpu can trigger the assert added by commit "6905b93447 console:
add same surface replace pre-condition" in multihead setups (where
surface can be NULL for secondary displays). Allow surface to be NULL.
Fixes: 6905b93447
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20170906142109.2685-1-kraxel@redhat.com
Remove pixman switches from configure; they should not be needed any more,
as configure can figure out by itself whether pixman is needed or not.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170905140116.28181-3-kraxel@redhat.com
Drop pixman submodule and support for the "internal" pixman build.
pixman should be reasonably well established meanwhile so we don't
need the fallback submodule any more. While at it, also drop
some #ifdefs for pixman versions older than what we require in
configure anyway.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170905140116.28181-2-kraxel@redhat.com
Don't reset window layout information (passed via virtio_gpu_ui_info) on
device reset, so the user interface window layout will be kept intact
over reboots. The head size and position were commented out already, so
this patch just drops the dead code. Additionally, the enabled head mask
must be kept so multihead setups work properly too.
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1460595
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-id: 20170906142058.2460-1-kraxel@redhat.com
GCC 4.7.2 on SunOS reports that the values assigned to array members are not
real constants:
target/m68k/fpu_helper.c:32:5: error: initializer element is not constant
target/m68k/fpu_helper.c:32:5: error: (near initialization for 'fpu_rom[0]')
rules.mak:66: recipe for target 'target/m68k/fpu_helper.o' failed
Convert the array to make_floatx80_init() to fix it.
Replace floatx80_pi-like constants with make_floatx80_init() as they are
defined as make_floatx80().
This fixes the build on SmartOS (Joyent).
Signed-off-by: Kamil Rytarowski <n54@gmx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170904212306.3020-1-n54@gmx.com>
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
ppc patch queue 2017-09-08
This is the first batch of ppc related patches for qemu-2.11, and it's
accumulated quite a few things. Includes:
* A cleanup to handling of ppc cpu models from Igor
* First parts of fixes to handling of guest vs. host SMT modes from
Sam Bobroff
* Preliminary patches towards supporting the Sam460 board from
Balaton Zoltan
* Several fixes for hotplug logic
* Assorted other fixes and cleanups
# gpg: Signature made Fri 08 Sep 2017 06:28:42 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.11-20170908: (40 commits)
ppc: spapr: Move VCPU ID calculation into sPAPR
ppc: remove non implemented cpu models
ppc: drop caching ObjectClass from PowerPCCPUAlias
ppc: simplify cpu model lookup by PVR
ppc: replace inter-function cyclic dependency/recurssion with 2 simple lookups
ppc: make cpu alias point only to real cpu models
ppc: make cpu_model translation to type consistent
ppc: use macros to make cpu type name from string literal
target/ppc: Remove old STATUS file
PPC: KVM: Support machine option to set VSMT mode
spapr: fallback to raw mode if best compat mode cannot be set during CAS
hw/nvram/spapr_nvram: Device can not be created by the users
hw/ppc/spapr_cpu_core: Add a proper check for spapr machine
ppc4xx: Export ECB and PLB emulation
ppc4xx_i2c: Move to hw/i2c
ppc4xx_i2c: QOMify
ppc4xx: Split off 4xx I2C emulation from ppc405_uc to its own file
ppc4xx: Make MAL emulation more generic
ppc4xx: Move MAL from ppc405_uc to ppc4xx_devs
spapr_iommu: Realloc guest visible TCE table when hot(un)plugging vfio-pci
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The callback is called on select.
Furthermore, the next patch introduces a new callback, so rename the
function type to a generic name.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Add a new slot_reserved_mask bitmask to PCIBus indicating whether or not each
PCI slot on the bus is reserved. Ensure that it is initialised to zero to
maintain the existing behaviour that all slots are available by default, and
add the additional check with appropriate error reporting to
do_pci_register_device().
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This compat property's sole function is to prevent the device from being
instantiated. Instead of requiring an extra compat property, check if
fw_cfg has DMA enabled.
fw_cfg is a built-in device that is initialized very early by the
machine init code. We have at least one other device that also
assumes fw_cfg_find() can be safely used on realize: pvpanic.
This has the additional benefit of handling other cases properly, like:
$ qemu-system-x86_64 -device vmgenid -machine none
qemu-system-x86_64: -device vmgenid: vmgenid requires DMA write support in fw_cfg, which this machine type does not provide
$ qemu-system-x86_64 -device vmgenid -machine pc-i440fx-2.9 -global fw_cfg.dma_enabled=off
qemu-system-x86_64: -device vmgenid: vmgenid requires DMA write support in fw_cfg, which this machine type does not provide
$ qemu-system-x86_64 -device vmgenid -machine pc-i440fx-2.6 -global fw_cfg.dma_enabled=on
[boots normally]
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Ben Warren <ben@skyportsystems.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Moved vmgenid from uncategorized to misc category in QEMU help menu
Signed-off-by: Yoni Bettan <ybettan@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
In vtd_switch_address_space() we do the memory region switch; however,
it's possible that its caller has not taken the BQL at all. Make
sure we have it.
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Jason Wang <jasowang@redhat.com>
CC: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
To enable hotplugging of a newly created pcie-pci-bridge,
we need to tell firmware (e.g. SeaBIOS) to reserve
additional buses or IO/MEM/PREF space for pcie-root-port.
Additional bus reservation allows us to hotplug a pcie-pci-bridge into this root port.
The number of buses and the IO/MEM/PREF space to reserve are provided to the device via
a corresponding property, and to the firmware via a new PCI capability.
The properties' default values are -1 to keep default behavior unchanged.
Signed-off-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
On PCI init PCI bridges may need some extra info about bus number,
IO, memory and prefetchable memory to reserve. QEMU can provide this
with a special vendor-specific PCI capability.
Signed-off-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Introduce a new PCIExpress-to-PCI Bridge device,
which is a hot-pluggable PCI Express device and
supports device hot-plug with SHPC.
This device is intended to replace the DMI-to-PCI Bridge.
Signed-off-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This reverts commit 153eba4726.
The reverted patch prevents PCI passthrough hotplug on Xen. Even if the Xen tool
stack prepares its own ACPI tables, we still rely on QEMU for hotplug
ACPI notifications.
The original issue is fixed by the two previous patches:
hw/acpi: Limit hotplug to root bus on legacy mode
hw/acpi: Move acpi_set_pci_info to pcihp
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
HW part of ACPI PCI hotplug in QEMU depends on ACPI_PCIHP_PROP_BSEL
being set on a PCI bus that supports ACPI hotplug. It should work
regardless of the source of ACPI tables (QEMU generator/legacy SeaBIOS/Xen).
So move ACPI_PCIHP_PROP_BSEL initialization into HW ACPI implementation
part from QEMU's ACPI table generator.
To do PCI passthrough with Xen, the property ACPI_PCIHP_PROP_BSEL needs
to be set, but this was done only when ACPI tables are built which is
not needed for a Xen guest. The need for the property starts with commit
"pc: pcihp: avoid adding ACPI_PCIHP_PROP_BSEL twice"
(f0c9d64a68).
Add find_i440fx to the stubs so that the mips-softmmu target can be built.
Reported-by: Sander Eikelenboom <linux@eikelenboom.it>
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
vhost registers a MemoryListener where it adds and removes references
to MemoryRegions as the MemoryRegionSections pass through. The
region_add callback is invoked for each existing section when the
MemoryListener is registered, but unregistering the MemoryListener
performs no reciprocal region_del callback. It's therefore the
owner of the MemoryListener's responsibility to cleanup any persistent
changes, such as these memory references, after unregistering.
The consequence of this bug is that if we have both a vhost device
and a vfio device, the vhost device will reference any mmap'd MMIO of
the vfio device via this MemoryListener. If the vhost device is then
removed, those references remain outstanding. If we then attempt to
remove the vfio device, it never gets finalized and the only way to
release the kernel file descriptors is to terminate the QEMU process.
Fixes: dfde4e6e1a ("memory: add ref/unref calls")
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-stable@nongnu.org # v1.6.0+
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Fri 08 Sep 2017 03:00:34 BST
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
colo-compare: Update the COLO document to add the IOThread configuration
colo-compare: Use IOThread to Check old packet regularly and Process pactkets of the primary
qemu-iothread: IOThread supports the GMainContext event loop
net/colo-compare.c: Fix comments and scheme
net/colo-compare.c: Adjust net queue pop order for performance
net/colo-compare.c: Optimize unpredictable tcp options comparison
e1000: Rename the SEC symbol to SEQEC
net/socket: Improve -net socket error reporting
net/net: Convert parse_host_port() to Error
net/socket: Convert several helper functions to Error
net/socket: Don't treat odd socket type as SOCK_STREAM
MAINTAINERS: Update mail address for COLO Proxy
net: rtl8139: do not use old_mmio accesses
net/rocker: Fix the unusual macro name
net/rocker: Convert to realize()
net/rocker: Plug memory leak in pci_rocker_init()
net/rocker: Remove the dead error handling
net/filter-rewriter.c: Fix rewirter checksum bug when use virtio-net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
TCG constant pools
# gpg: Signature made Thu 07 Sep 2017 23:35:45 BST
# gpg: using RSA key 0x64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tcg-20170907: (23 commits)
tcg/ppc: Use constant pool for movi
tcg/ppc: Look for shifted constants
tcg/ppc: Change TCG_REG_RA to TCG_REG_TB
tcg/arm: Use constant pool for call
tcg/arm: Use constant pool for movi
tcg/arm: Extract INSN_NOP
tcg/arm: Code rearrangement
tcg/arm: Tighten tlb indexing offset test
tcg/arm: Improve tlb load for armv7
tcg/sparc: Use constant pool for movi
tcg/sparc: Introduce TCG_REG_TB
tcg/aarch64: Use constant pool for movi
tcg/s390: Use constant pool for cmpi
tcg/s390: Use constant pool for xori
tcg/s390: Use constant pool for ori
tcg/s390: Use constant pool for andi
tcg/s390: Use constant pool for movi
tcg/s390: Fix sign of patch_reloc addend
tcg/s390: Introduce TCG_REG_TB
tcg/i386: Store out-of-range call targets in constant pool
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Conversion to TranslatorOps
# gpg: Signature made Thu 07 Sep 2017 19:42:48 BST
# gpg: using RSA key 0x64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-pa-20170907:
target/hppa: Convert to TranslatorOps
target/hppa: Convert to DisasContextBase
target/hppa: Convert to DisasJumpType
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Queued target/alpha patches
# gpg: Signature made Thu 07 Sep 2017 19:17:22 BST
# gpg: using RSA key 0x64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-axp-20170907:
target/alpha: Switch to do_transaction_failed() hook
target/alpha: Convert to TranslatorOps
target/alpha: Convert to DisasContextBase
target/alpha: Convert to DisasJumpType
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Remove the task which checks old packets from the comparing thread,
and use an IOThread context timer to handle it instead.
Process the packets that arrive over the socket in the IOThread.
We use iothread_get_g_main_context to create a new g_main_loop in
the IOThread, so the packets from the primary and the secondary
are processed in the IOThread.
Finally, remove the colo-compare thread and use the IOThread instead.
Reviewed-by: Zhang Chen<zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Wang Yong <wang.yong155@zte.com.cn>
Signed-off-by: Wang Guang <wang.guang55@zte.com.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
IOThread uses the AioContext event loop and does not run a GMainContext.
Therefore, a chardev cannot work in an IOThread, such as the chardev
used for colo-compare packet reception.
This patch makes the IOThread run the GMainContext event loop, so
chardev and IOThread can work together.
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Wang Yong <wang.yong155@zte.com.cn>
Signed-off-by: Wang Guang <wang.guang55@zte.com.cn>
Signed-off-by: Jason Wang <jasowang@redhat.com>
packet_enqueue() uses g_queue_push_tail() to
enqueue net packets, so it is more efficient to use
g_queue_pop_head() to get packets for comparison.
That will improve the success rate of comparison.
In my test, the performance of an ftp put of a 1000M file
will increase by 10%.
Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
When the network is busy, some tcp options (like sack) can unpredictably
occur on the primary or secondary side. This makes the packet sizes
differ, even though the two packets' payloads are identical. COLO only
cares about the packet payload, so we skip the option field.
Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
SunOS defines SEC in <sys/time.h> as 1 (commonly used time symbols).
This fixes the build on SmartOS (Joyent).
Patch cherry-picked from pkgsrc by jperkin (Joyent).
Signed-off-by: Kamil Rytarowski <n54@gmx.com>
Reviewed-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
When -net socket fails, it first reports a specific error, then
a generic one, like this:
$ ./x86_64-softmmu/qemu-system-x86_64 -net socket,mcast=230.0.0.1:1234,listen
qemu-system-x86_64: -net socket,mcast=230.0.0.1:1234,listen: exactly one of listen=, connect=, mcast= or udp= is required
qemu-system-x86_64: -net socket,mcast=230.0.0.1:1234,listen: Device 'socket' could not be initialized
Convert net_socket_*_init() to Error to get rid of the superfluous second
error message. After the patch, the effect is like this:
$ ./x86_64-softmmu/qemu-system-x86_64 -net socket,mcast=230.0.0.1:1234,listen
qemu-system-x86_64: -net socket,mcast=230.0.0.1:1234,listen: exactly one of listen=, connect=, mcast= or udp= is required
This also fixes a few silent failures to report an error.
Cc: jasowang@redhat.com
Cc: armbru@redhat.com
Cc: berrange@redhat.com
Signed-off-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
In net_socket_fd_init(), the 'default' case is odd: it warns,
then continues as if the socket type was SOCK_STREAM. The
comment explains "this could be a eg. a pty", but that makes
no sense. If @fd really was a pty, getsockopt() would fail
with ENOTSOCK. If @fd was a socket, but neither SOCK_DGRAM nor
SOCK_STREAM, it should not be treated as if it was SOCK_STREAM.
Turn this case into an Error. If there is a genuine reason to
support something like SOCK_RAW, it should be explicitly
handled.
Cc: jasowang@redhat.com
Cc: armbru@redhat.com
Cc: berrange@redhat.com
Cc: armbru@redhat.com
Cc: eblake@redhat.com
Suggested-by: Markus Armbruster <armbru@redhat.com>
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
My Fujitsu mail account will be disabled soon, update the mail info
to my private mail.
Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Both io and memory use the same mmio functions in the rtl8139 device.
This patch removes the separate MemoryRegionOps and old_mmio accessors
for memory, and replaces them with an alias to the io memory region.
Signed-off-by: Matt Parker <mtparkr@gmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
The rocker device still implements the old PCIDeviceClass .init()
instead of the new .realize(). All devices need to be converted to
.realize().
.init() reports errors with fprintf() and returns 0 on success, a negative
number on failure. Meanwhile, when -device rocker fails, it first reports
a specific error, then a generic one, like this:
$ x86_64-softmmu/qemu-system-x86_64 -device rocker,name=qemu-rocker
rocker: name too long; please shorten to at most 9 chars
qemu-system-x86_64: -device rocker,name=qemu-rocker: Device initialization failed
Now, convert it to .realize(), which passes errors to its callers via its
errp argument. Also avoid the superfluous second error message. After
the patch, the effect is like this:
$ x86_64-softmmu/qemu-system-x86_64 -device rocker,name=qemu-rocker
qemu-system-x86_64: -device rocker,name=qemu-rocker: name too long; please shorten to at most 9 chars
Cc: jasowang@redhat.com
Cc: jiri@resnulli.us
Cc: armbru@redhat.com
Cc: f4bug@amsat.org
Signed-off-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Memory allocation functions like world_alloc, desc_ring_alloc etc. are
all wrappers around g_malloc, g_new etc. But g_malloc and similar
functions don't return null on failure; the only case where they return
null is a g_malloc() of 0 bytes, which these checks ignore anyway. So the
error checks for allocation failure are superfluous. Now, remove them entirely.
Cc: jasowang@redhat.com
Cc: jiri@resnulli.us
Cc: armbru@redhat.com
Cc: f4bug@amsat.org
Signed-off-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Move the calculation of a CPU's VCPU ID out of the generic PPC code
(ppc_cpu_realizefn()) and into sPAPR specific code
(spapr_cpu_core_realize()) where it belongs.
Unfortunately, due to the way things are ordered, we still need to
default the VCPU ID in ppc_cpu_realizefn(), but at least doing that
doesn't require any interaction with sPAPR.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Remove cpu models that aren't implemented and are not
compiled/tested, since they are under a TODO ifdef
which isn't defined in the sources.
If someone really needs a removed model, he/she should add it
as a regular one with a corresponding implementation.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Caching there practically doesn't give any benefit, and only on the
slow path of querying the supported CPU list.
But it introduces an unconventional path for where the used CPU type
name comes from (kvm_ppc_register_host_cpu_type).
Taking into account that kvm_ppc_register_host_cpu_type()
fixes up the models the aliases point to, it's sufficient to
make ppc_cpu_class_by_name() translate a cpu alias to the
correct cpu type name.
So drop the PowerPCCPUAlias::oc field and ppc_cpu_class_by_alias(),
and let ppc_cpu_class_by_name() do the conversion to the cpu type name,
which simplifies the code a little bit, saving ~20 LOC and the trouble
of wondering why ppc_cpu_class_by_alias() is necessary.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Previous patches cleaned up the cpu model/alias naming, which
allows simplifying the cpu model/alias to cpu type lookup a bit
by removing the recursion and the dependency of ppc_cpu_class_by_name() /
ppc_cpu_class_by_alias() on each other.
Besides simplifying the code, it reduces it by ~15 LOC.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
An alias pointing to another alias forces the lookup code to
do recursive translation until a real cpu model is reached.
Drop this nonsense and make each alias point to a cpu model
that has a corresponding CPU type. It will allow dropping the
recursion in the cpu model translation code and actually
make the ppc_cpu_aliases[] content use the PowerPCCPUAlias
fields properly
(i.e. the alias goes into .alias and the model goes into .model).
While at it, add TODO defines around aliases that point to
cpu models excluded by the same TODO defines.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
PPC handles -cpu FOO rather inconsistently,
i.e. it does case-insensitive matching of FOO to
a CPU type (see: ppc_cpu_compare_class_name) but
handles alias names as case-sensitive, as a result:
# qemu-system-ppc64 -M mac99 -cpu g3
qemu-system-ppc64: unable to find CPU model ' kN�U'
# qemu-system-ppc64 -cpu 970MP_V1.1
qemu-system-ppc64: Unable to find sPAPR CPU Core definition
while
# qemu-system-ppc64 -M mac99 -cpu G3
# qemu-system-ppc64 -cpu 970MP_v1.1
start up just fine.
Considering we can't take the case-insensitive matching away,
make it case-insensitive for all alias/type/core_type
lookups.
As a side effect, it allows removing duplicate core types
which are the same except for using differently cased letters in the name.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Replace
"-" TYPE_POWERPC_CPU
when composing the cpu type name from a cpu model string literal,
and the same pattern in format strings, with the
POWERPC_CPU_TYPE_SUFFIX and POWERPC_CPU_TYPE_NAME(model)
macros like we do in x86.
Later, POWERPC_CPU_TYPE_NAME() will be used to define the default
cpu type per machine type, and as a bonus it will be a consistent
and easily grep-able pattern across all other targets that I'm
planning to treat the same way.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
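For reference, a sketch of what the macro pattern looks like; the exact definitions below are an assumption based on the commit text and the x86 precedent, with a placeholder type name:

  /* Placeholder value; the real TYPE_POWERPC_CPU comes from target/ppc. */
  #define TYPE_POWERPC_CPU             "powerpc-cpu"

  /* Assumed shape of the two macros described above. */
  #define POWERPC_CPU_TYPE_SUFFIX      "-" TYPE_POWERPC_CPU
  #define POWERPC_CPU_TYPE_NAME(model) model POWERPC_CPU_TYPE_SUFFIX

  /* Before: "970MP_v1.1" "-" TYPE_POWERPC_CPU
   * After:  POWERPC_CPU_TYPE_NAME("970MP_v1.1") */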
The target/ppc/STATUS file saw its last real update 10 years
ago, so the information in there is not up to date anymore. Since
nobody seems to care about this file, let's simply remove it.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
KVM now allows writing to KVM_CAP_PPC_SMT, which was previously
read-only. Doing so causes KVM to act, for that VM, as if the host's
SMT mode was the given value. This is particularly important on Power
9 systems because their default value is 1, but they are able to
support values up to 8.
This patch introduces a way to control this capability via a new
machine property called VSMT ("Virtual SMT"). If the value is not set
on the command line a default is chosen that is, when possible,
compatible with legacy systems.
Note that the initialization of KVM_CAP_PPC_SMT has changed slightly
because it has changed (in KVM) from a global capability to a
VM-specific one. This won't cause a problem on older KVMs because VM
capabilities fall back to global ones.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
KVM PR doesn't allow setting a compat mode. This causes ppc_set_compat_all()
to fail and we return H_HARDWARE to the guest right away.
This is excessive: even if we favor compat mode since commit 152ef803ce,
we should at least fall back to raw mode if the guest supports it.
This patch modifies cas_check_pvr() so that it also reports whether the real
PVR was found in the table supplied by the guest. Note that this only
makes sense if raw mode isn't explicitly disabled (i.e., the user didn't
set the machine "max-cpu-compat" property). If this is the case, we can
simply ignore ppc_set_compat_all() failures and let the guest run in raw
mode.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Trying to add a spapr-nvram device currently aborts QEMU like this:
$ ppc64-softmmu/qemu-system-ppc64 -device spapr-nvram
qemu-system-ppc64: hw/ppc/spapr_rtas.c:407: spapr_rtas_register:
Assertion `!rtas_table[token].name' failed.
Aborted (core dumped)
This NVRAM device registers RTAS calls during its realize function
and thus can only be used once - and that's internally from spapr.c.
So let's mark the device with user_creatable = false, so that
users can no longer crash their QEMU this way.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
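For illustration, marking a device this way is a one-line change in its class_init; a minimal sketch assuming the usual qdev headers, with the function name made up for the example:

  static void nvram_example_class_init(ObjectClass *klass, void *data)
  {
      DeviceClass *dc = DEVICE_CLASS(klass);

      /* The device registers RTAS calls in realize, so it must only be
       * created internally by the machine, never via -device. */
      dc->user_creatable = false;
  }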
QEMU currently crashes when the user tries to add a spapr-cpu-core
on a non-pseries machine:
$ qemu-system-ppc64 -S -machine ppce500,accel=tcg \
-device POWER5+_v2.1-spapr-cpu-core
hw/ppc/spapr_cpu_core.c:178:spapr_cpu_core_realize_child:
Object 0x55cee1f55160 is not an instance of type spapr-machine
Aborted (core dumped)
So let's add a proper check for the correct machine type with
a more friendly error message here.
Reported-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Make these device models available outside ppc405_uc.c for reuse in
460EX emulation. They are left in their current place for now because
they are used mostly unchanged and I'm not sure these correctly model
the components in 440 SoCs (but they seem to be good enough). These
functions could be moved in a subsequent clean up series when this is
confirmed.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This device appears in other SoCs as well, not just 405 ones, and
subsequent patches will modify it, so move it out of ppc405_uc.c in
preparation.
Signed-off-by: BALATON Zoltan <balaton@eik.bme.hu>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This replaces g_malloc() with spapr_tce_alloc_table(), as this is
the standard way of allocating tables and it allows moving the table
back to KVM when unplugging a VFIO PCI device and VFIO TCE acceleration
support is not present in KVM.
Although spapr_tce_alloc_table() is expected to fail with EBUSY
if called when the previous fd is not closed yet, in practice we will not
see this because cap_spapr_vfio is false at the moment.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Some OSes don't populate the TSIZE field when using a fixed-size TLB, which
results in a 1KB TLB. When the TLB is a fixed-size TLB, the TSIZE field should
be ignored.
Fix this wrong behavior for MAV 2.0.
Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This fixes booke206_tlbnps for MAV 2.0 by checking the MMUCFG register and
returning the right tlbnps directly instead of computing it from a
non-existent field.
Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The concept of a VCPU ID that differs from the CPU's index
(cpu->cpu_index) exists only within sPAPR machines, so move the
functions ppc_get_vcpu_id() and ppc_get_cpu_by_vcpu_id() into spapr.c
and rename them appropriately.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This field actually records the VCPU ID used by KVM and, although the
value is also used in the device tree, it is primarily the VCPU ID, so
rename it as such.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
[dwg: Updated comment missed in cpu.h]
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The e500 platform code uses the function ppc_get_vcpu_dt_id() to get
an id to put in its device tree, which seems like it makes sense, but
ppc_get_vcpu_dt_id() is actually badly named - it only differs from
cpu_index in cases where you're running on KVM HV and the host's
number of threads differs from the guest's. Since KVM HV only supports
PAPR, not e500, it doesn't make sense to use it here.
Simply use the cpu_index instead (which is 'i' in this context
because qemu_get_cpu(i) returns the cpu with cpu_index == i).
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
[dwg: Rewrote commit message]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When hot-unplugging a PHB, all its PCI DRC connectors get unrealized. This
patch adds an unrealize method to the physical DRC class, in order to undo
registrations performed in realize_physical().
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This memory region should be owned by the PHB. This ensures the PHB
cannot be finalized as long as the region is guest-visible, or
used by a CPU or a device.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Passing a stack allocated buffer of arbitrary length to snprintf()
without checking the return value can cause the resultant strings
to be silently truncated.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Passing a stack allocated buffer of arbitrary length to snprintf()
without checking the return value can cause the resultant strings
to be silently truncated.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
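Two common ways to avoid that silent truncation are to check snprintf()'s return value or to let glib allocate a correctly sized string; a generic sketch of both, not the exact change made in these patches:

  #include <glib.h>
  #include <stdio.h>

  /* Fixed-size buffer variant: detect truncation explicitly. */
  static int format_name(char *buf, size_t len, int index)
  {
      int n = snprintf(buf, len, "drc-%d", index); /* "drc-%d" is illustrative */
      if (n < 0 || (size_t)n >= len) {
          return -1;    /* would previously have been silently truncated */
      }
      return 0;
  }

  /* Allocation variant: g_strdup_printf() sizes the string for us. */
  static char *format_name_alloc(int index)
  {
      return g_strdup_printf("drc-%d", index);     /* caller must g_free() */
  }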
Passing a null priority to memory_region_add_subregion_overlap() is
strictly equivalent to calling memory_region_add_subregion().
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
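In other words, the change is just a call-site simplification of this shape; the wrapper function and region names are illustrative, while the memory API calls are QEMU's:

  static void map_region_example(MemoryRegion *parent, MemoryRegion *sub,
                                 hwaddr offset)
  {
      /* Before:
       *   memory_region_add_subregion_overlap(parent, offset, sub, 0);
       * After (identical behaviour when the priority is 0):
       */
      memory_region_add_subregion(parent, offset, sub);
  }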
This patch is a follow up on the discussions made in patch
"hw/ppc: disable hotplug before CAS is completed" that can be
found at [1].
At this moment, we do not support CPU/memory hotplug in early
boot stages, before CAS. When a hotplug occurs, the event is logged
in an internal RTAS event log queue and an IRQ pulse is fired. In
regular conditions, the guest handles the interrupt by executing
check_exception, fetching the generated hotplug event and enabling
the device for use.
In early boot, this IRQ isn't caught (SLOF does not handle hotplug
events), leaving the event in the rtas event log queue. If the guest
executes check_exception due to another hotplug event, the re-assertion
of the IRQ ends up de-queuing the first hotplug event as well. In short,
a device hotplugged before CAS is considered coldplugged by SLOF.
This leads to device misbehavior and, in some cases, a guest kernel
Oops when trying to unplug the device.
A proper fix would be to turn every device hotplugged before CAS
into a coldplugged device. This is not trivial to do with the current
code base though - the FDT is written into guest memory at
ppc_spapr_reset and can't be retrieved without adding extra state
(fdt_size, for example) that would need to be managed and migrated. Adding
the hotplugged DT in the middle of CAS negotiation via the updated DT
tree works with CPU devs, but panics the guest kernel at boot. Additional
analysis would be necessary for LMBs and PCI devices. There are
questions to be made in QEMU/SLOF/kernel level about how we can make
this change in a sustainable way.
With Linux guests, a fix would be the kernel executing check_exception
at boot time, de-queueing the events that happened in early boot and
processing them. However, even if/when the newer kernels start
fetching these events at boot time, we need to take care of older
kernels that won't be doing that.
This patch works around the situation by issuing a CAS reset if a hotplugged
device is detected during CAS:
- the DRC conditions that warrant a CAS reset are the same as those that
trigger a DRC migration: the DRC must have a device attached and
the DRC state must not be equal to its ready_state. With that in mind, this
patch makes use of 'spapr_drc_needed' to determine if a CAS reset
is needed.
- In the middle of CAS negotiations, the function
'spapr_hotplugged_dev_before_cas' goes through all the DRCs to see
if any DRC requires a reset, using spapr_drc_needed. If so,
'1' is returned in 'spapr_h_cas_compose_response', which sets
spapr->cas_reboot to true, causing the machine to reboot.
No changes are made for coldplug devices.
[1] http://lists.nongnu.org/archive/html/qemu-devel/2017-08/msg02855.html
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The sPAPR machine isn't clearing up the pending events QTAILQ on
machine reboot. This allows unprocessed hotplug/epow events
to persist in the queue after reset and, when the IRQs are reasserted in
check_exception later on, these stale events end up being processed by the OS.
This patch implements a new function called 'spapr_clear_pending_events'
that clears up the pending_events QTAILQ. This helper is then called
inside ppc_spapr_reset to clear up the events queue, preventing
old/deprecated events from persisting after a reset.
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
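A sketch of what such a helper can look like with QEMU's queue.h macros; the link-field name and the single g_free() per entry are assumptions for the example, not the exact sPAPR code:

  static void spapr_clear_pending_events(sPAPRMachineState *spapr)
  {
      sPAPREventLogEntry *entry, *next_entry;

      QTAILQ_FOREACH_SAFE(entry, &spapr->pending_events, node, next_entry) {
          QTAILQ_REMOVE(&spapr->pending_events, entry, node);
          g_free(entry);  /* any payload owned by the entry would go here too */
      }
  }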
This patch makes a small fix in 'spapr_drc_needed' to change how we detect
if a DRC has a device attached. Previously it used dr_entity_sense for this,
which works for physical DRCs.
However, for logical DRCs, it didn't cover the case where a logical DRC has
a drc->dev but the state is LOGICAL_UNUSABLE (e.g. a hotplugged CPU before
CAS). In this case, dr_entity_sense for this DRC returns UNUSABLE and the
code considered that no device was attached, making spapr_drc_needed
return 'false' when in fact we would like to migrate the DRC.
Changing it to check for drc->dev instead works for all DRC types.
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
At this point the conversion is a wash. Loading of TB+ofs is
smaller, but the actual return address from exit_tb is larger.
There are a few more insns required to transition between TBs.
But the expectation is that accesses to the constant pool will
on the whole be smaller.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Move constants before all of the functions.
Move tcg_out_<format> functions before all
of the others. No functional change.
Signed-off-by: Richard Henderson <rth@twiddle.net>
We are not going to use ldrd for loading the comparator
for 32-bit guests, so don't limit cmp_off to 8 bits then.
This eliminates one insn in the tlb load for some guests.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use UBFX to avoid limitation on CPU_TLB_BITS. Since we're dropping
the initial shift, we need to replace the page masking. We can use
MOVW+BIC to do this without shifting. The result is the same size
as the armv6 path with one less conditional instruction.
Signed-off-by: Richard Henderson <rth@twiddle.net>
We were passing in -2 instead of +2, but then ignoring
the actual contents of addend in the calculation.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Already it saves 2 bytes per call, but also the constant pool
entry may well be shared across multiple calls.
Signed-off-by: Richard Henderson <rth@twiddle.net>
A new shared header tcg-pool.inc.c adds new_pool_label,
for registering a tcg_target_ulong to be emitted after
the generated code, plus relocation data to install a
pointer to the data.
A new pointer is added to the TCGContext, so that we
dump the constant pool as data, not code.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Dispense with TCGBackendData, as it has never been used for more than
holding a single pointer. Use a define in the cpu/tcg-target.h to
signal requirement for TCGLabelQemuLdst, so that we can drop the no-op
tcg-be-null.h stubs. Rename tcg-be-ldst.h to tcg-ldst.inc.c.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump
boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional
function tb_target_set_jmp_target.
While we're touching all backends, add a parameter for tb->tc_ptr;
we're going to need it shortly for some backends.
Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c.
This opens the possibility for TCG_TARGET_HAS_direct_jump to be
a runtime decision -- based on host cpu capabilities, the size of
code_gen_buffer, or a future debugging switch.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Switch the alpha target from the old unassigned_access hook
to the new do_transaction_failed hook. This allows us to
resolve a ??? in the old hook implementation.
The only part of the alpha target that does physical
memory accesses is reading the page table -- add a
TODO comment there to the effect that we should handle
bus faults on page table walks. (Since the palcode
doesn't actually do anything useful on a bus fault anyway
it's a bit moot for now.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1502196172-13818-1-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Nobody has mentioned AIX host support on the mailing list for years,
and we have no test systems for it so it is most likely broken.
We've advertised in configure for two releases now that we plan
to drop support for this host OS, and have had no complaints.
Drop the AIX host support code.
We can also drop the now-unused AIX version of sys_cache_info().
Note that the _CALL_AIX define used in the PPC tcg backend is
also used for Linux PPC64, and so that code should not be removed.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1504545540-8002-1-git-send-email-peter.maydell@linaro.org
nbd patches for 2017-09-06
- Daniel P. Berrange: [0/2] Fix / skip recent iotests with LUKS driver
- Eric Blake: [0/3] nbd: Use common read/write-all qio functions
# gpg: Signature made Wed 06 Sep 2017 16:17:55 BST
# gpg: using RSA key 0xA7A16B4A2527436A
# gpg: Good signature from "Eric Blake <eblake@redhat.com>"
# gpg: aka "Eric Blake (Free Software Programmer) <ebb9@byu.net>"
# gpg: aka "[jpeg image of size 6874]"
# Primary key fingerprint: 71C2 CC22 B1C4 6029 27D2 F3AA A7A1 6B4A 2527 436A
* remotes/ericb/tags/pull-nbd-2017-09-06:
nbd: Use new qio_channel_*_all() functions
io: Add new qio_channel_read{, v}_all_eof functions
io: Yield rather than wait when already in coroutine
iotests: blacklist 194 with the luks driver
iotests: rewrite 192 to use _launch_qemu to fix LUKS support
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
target-arm:
* cleanups converting to DEFINE_PROP_LINK
* allwinner-a10: mark as not user-creatable
* initial patches working towards ARMv8M support
* implement generating aborts on memory transaction failures
* make BXJ behave correctly (ie not UNDEF) on ARMv6-and-later
# gpg: Signature made Thu 07 Sep 2017 14:26:07 BST
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20170907: (31 commits)
target/arm: Add Jazelle feature
target/arm: Implement new do_transaction_failed hook
hw/arm: Set ignore_memory_transaction_failures for most ARM boards
boards.h: Define new flag ignore_memory_transaction_failures
target/arm: Implement BXNS, and banked stack pointers
target/arm: Move regime_is_secure() to target/arm/internals.h
target/arm: Make CFSR register banked for v8M
target/arm: Make MMFAR banked for v8M
target/arm: Make CCR register banked for v8M
target/arm: Make MPU_CTRL register banked for v8M
target/arm: Make MPU_RNR register banked for v8M
target/arm: Make MPU_RBAR, MPU_RLAR banked for v8M
target/arm: Make MPU_MAIR0, MPU_MAIR1 registers banked for v8M
target/arm: Make VTOR register banked for v8M
nvic: Add NS alias SCS region
target/arm: Make CONTROL register banked for v8M
target/arm: Make FAULTMASK register banked for v8M
target/arm: Make PRIMASK register banked for v8M
target/arm: Make BASEPRI register banked for v8M
target/arm: Add MMU indexes for secure v8M
...
# Conflicts:
# target/arm/translate.c
migration pull 2017-09-06
# gpg: Signature made Wed 06 Sep 2017 19:39:23 BST
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20170906a:
migration: dump str in migrate_set_state trace
snapshot/tests: Try loadvm twice
migration: Reset rather than destroy main_thread_load_event
runstate/migrate: Two more transitions
host-utils: Simplify pow2ceil()
host-utils: Proactively fix pow2floor(), switch to unsigned
xbzrle: Drop unused cache_resize()
migration: Report when bdrv_inactivate_all fails
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
tcg generic translate loop v15
# gpg: Signature made Wed 06 Sep 2017 17:02:31 BST
# gpg: using RSA key 0x64DF38E8AF7E215F
# gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* remotes/rth/tags/pull-tgt-20170906: (32 commits)
target/arm: Perform per-insn cross-page check only for Thumb
target/arm: Split out thumb_tr_translate_insn
target/arm: Move ss check to init_disas_context
target/arm: [a64] Move page and ss checks to init_disas_context
target/arm: [tcg] Port to generic translation framework
target/arm: [tcg,a64] Port to disas_log
target/arm: [tcg] Port to disas_log
target/arm: [tcg,a64] Port to tb_stop
target/arm: [tcg] Port to tb_stop
target/arm: [tcg,a64] Port to translate_insn
target/arm: [tcg] Port to translate_insn
target/arm: [tcg,a64] Port to breakpoint_check
target/arm: [tcg,a64] Port to insn_start
target/arm: [tcg] Port to insn_start
target/arm: [tcg] Port to tb_start
target/arm: [tcg,a64] Port to init_disas_context
target/arm: [tcg] Port to init_disas_context
target/arm: [tcg] Port to DisasContextBase
target/i386: [tcg] Port to generic translation framework
target/i386: [tcg] Port to disas_log
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This adds a feature bit indicating support of the (trivial) Jazelle
implementation if ARM_FEATURE_V6 is set or if the processor is arm926
or arm1026. This fixes the issue that any BXJ instruction will
result in an illegal_op. BXJ instructions will now check if the
architecture supports ARM_FEATURE_JAZELLE.
Signed-off-by: Portia Stephens <portia.stephens@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 20170905211232.11092-1-portia.stephens@xilinx.com
[PMM: edited commit message and comment text a bit]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Set the MachineClass flag ignore_memory_transaction_failures
for almost all ARM boards. This means they retain the legacy
behaviour that accesses to unimplemented addresses will RAZ/WI
rather than aborting, when a subsequent commit adds support
for external aborts.
The exceptions are:
* virt -- we know that guests won't try to prod devices
that we don't describe in the device tree or ACPI tables
* mps2 -- this board was written to use unimplemented-device
for all the ranges with devices we don't yet handle
New boards should not set the flag, but instead be written
like the mps2.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1504626814-23124-3-git-send-email-peter.maydell@linaro.org
For the Xilinx boards:
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Define a new MachineClass field ignore_memory_transaction_failures.
If this flag is true then the CPU will ignore memory transaction
failures which should cause the CPU to take an exception due to an
access to an unassigned physical address; the transaction will
instead return zero (for a read) or be ignored (for a write). This
should be set only by legacy board models which rely on the old
RAZ/WI behaviour for handling devices that QEMU does not yet model.
New board models should instead use "unimplemented-device" for all
memory ranges where the guest will attempt to probe for a device that
QEMU doesn't implement and a stub device is required.
We need this for ARM boards, where we're about to implement support for
generating external aborts on memory transaction failures. Too many
of our legacy board models rely on the RAZ/WI behaviour and we
would break currently working guests when their "probe for device"
code provoked an external abort rather than a RAZ.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Message-id: 1504626814-23124-2-git-send-email-peter.maydell@linaro.org
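For a legacy board, opting in is then a one-line class_init change; a sketch assuming the usual MachineClass boilerplate, with the board and init function names made up:

  static void legacy_board_class_init(ObjectClass *oc, void *data)
  {
      MachineClass *mc = MACHINE_CLASS(oc);

      mc->desc = "Legacy ARM board (illustrative)";
      mc->init = legacy_board_init;               /* hypothetical init hook */
      /* Keep the old RAZ/WI behaviour instead of generating external aborts. */
      mc->ignore_memory_transaction_failures = true;
  }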
Implement the BXNS v8M instruction, which is like BX but will do a
jump-and-switch-to-NonSecure if the branch target address has bit 0
clear.
This is the first piece of code which implements "switch to the
other security state", so the commit also includes the code to
switch the stack pointers around, which is the only complicated
part of switching security state.
BLXNS is more complicated than just "BXNS but set the link register",
so we leave it for a separate commit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-21-git-send-email-peter.maydell@linaro.org
Make the CCR register banked if v8M security extensions are enabled.
This is slightly more complicated than the other "add banking"
patches because there is one bit in the register which is not
banked. We keep the live data in the NS copy of the register,
and adjust it on register reads and writes. (Since we don't
currently implement the behaviour that the bit controls, there
is nowhere else that needs to care.)
This patch includes the enforcement of the bits which are newly
RES1 in ARMv8M.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1503414539-28762-17-git-send-email-peter.maydell@linaro.org
Make the MPU registers MPU_MAIR0 and MPU_MAIR1 banked if v8M security
extensions are enabled.
We can freely add more items to vmstate_m_security without
breaking migration compatibility, because no CPU currently
has the ARM_FEATURE_M_SECURITY bit enabled and so this
subsection is not yet used by anything.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-14-git-send-email-peter.maydell@linaro.org
For v8M the range 0xe002e000..0xe002efff is an alias region which
for secure accesses behaves like a NonSecure access to the main
SCS region. (For nonsecure accesses including when the security
extension is not implemented, it is RAZ/WI.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1503414539-28762-11-git-send-email-peter.maydell@linaro.org
Make the FAULTMASK register banked if v8M security extensions are enabled.
Note that we do not yet implement the functionality of the new
AIRCR.PRIS bit (which allows the effect of the NS copy of FAULTMASK to
be restricted).
This patch includes the code to determine for v8M which copy
of FAULTMASK should be updated on exception exit; further
changes will be required to the exception exit code in general
to support v8M, so this is just a small piece of that.
The v8M ARM ARM introduces a notation where individual paragraphs
are labelled with R (for rule) or I (for information) followed
by a random group of subscript letters. In comments where we want
to refer to a particular part of the manual we use this convention,
which should be more stable across document revisions than using
section or page numbers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-9-git-send-email-peter.maydell@linaro.org
As the first step in implementing ARM v8M's security extension:
* add a new feature bit ARM_FEATURE_M_SECURITY
* add the CPU state field that indicates whether the CPU is
currently in the secure state
* add a migration subsection for this new state
(we will add the Secure copies of banked register state
to this subsection in later patches)
* add a #define for the one new-in-v8M exception type
* make the CPU debug log print S/NS status
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-4-git-send-email-peter.maydell@linaro.org
As part of ARMv8M, we need to add support for the PMSAv8 MPU
architecture.
PMSAv8 differs from PMSAv7 both in register/data layout (for instance
using base and limit registers rather than base and size) and also in
behaviour (for example it does not have subregions); rather than
trying to wedge it into the existing PMSAv7 code and data structures,
we define separate ones.
This commit adds the data structures which hold the state for a
PMSAv8 MPU and the register interface to it. The implementation of
the MPU behaviour will be added in a subsequent commit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1503414539-28762-2-git-send-email-peter.maydell@linaro.org
QEMU currently exits unexpectedly when the user accidentally
tries to do something like this:
$ aarch64-softmmu/qemu-system-aarch64 -S -M integratorcp -nographic
QEMU 2.9.93 monitor - type 'help' for more information
(qemu) device_add allwinner-a10
Unsupported NIC model: smc91c111
Exiting just due to a "device_add" should not happen. Looking closer
at the realize and instance_init functions of this device also
reveals that it uses serial_hds and nd_table directly there, so
this device is clearly not creatable by the user and should be marked
accordingly.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 1503416789-32080-1-git-send-email-thuth@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Rather than open-coding our own read/write-all functions, we
can make use of the recently-added qio code. It slightly
changes the error message in one of the iotests.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170905191114.5959-4-eblake@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Some callers want to distinguish between clean EOF (no bytes read)
vs. a short read (at least one byte read, but EOF encountered
before reaching the desired length), as it allows clients the
ability to do a graceful shutdown when a server shuts down at
defined safe points in the protocol, rather than treating all
shutdown scenarios as an error due to EOF. However, we don't want
to require all callers to have to check for early EOF. So add
another wrapper function that can be used by the callers that care
about the distinction.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170905191114.5959-3-eblake@redhat.com>
Acked-by: Daniel P. Berrange <berrange@redhat.com>
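The underlying pattern, independent of the qio API itself, is a read loop that treats EOF at offset zero differently from EOF mid-buffer; a generic POSIX sketch of the idea, not the actual qio_channel implementation:

  #include <errno.h>
  #include <stddef.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Returns 1 on a full read, 0 on clean EOF (no bytes read at all),
   * -1 on error or on EOF in the middle of the buffer. */
  static int read_all_eof(int fd, void *buf, size_t len)
  {
      size_t done = 0;

      while (done < len) {
          ssize_t n = read(fd, (char *)buf + done, len - done);
          if (n < 0) {
              if (errno == EINTR) {
                  continue;
              }
              return -1;                   /* hard error */
          }
          if (n == 0) {
              return done == 0 ? 0 : -1;   /* clean EOF vs. short read */
          }
          done += (size_t)n;
      }
      return 1;
  }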
The new qio_channel_{read,write}{,v}_all functions are documented
as yielding until data is available. When used on a blocking
channel, this yield is done via qio_channel_wait() which spawns
a nested event loop under the hood (so it is that secondary loop
which yields as needed); but if we are already in a coroutine (at
which point QIO_CHANNEL_ERR_BLOCK is only possible if we are a
non-blocking channel), we want to yield the current coroutine
instead of spawning a nested event loop.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170905191114.5959-2-eblake@redhat.com>
Acked-by: Daniel P. Berrange <berrange@redhat.com>
[commit message updated]
Signed-off-by: Eric Blake <eblake@redhat.com>
ARM is a fixed-length ISA and we can compute the page crossing
condition exactly once during init_disas_context.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We need not check for ARM vs Thumb state in order to dispatch
disassembly of every instruction.
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Since AArch64 uses a fixed-width ISA, we can pre-compute the number of
insns remaining on the page. Also, we can check for single-step once.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-Id: <150002073981.22386.9870422422367410100.stgit@frigg.lan>
[rth: Moved max_insns adjustment from tb_start to init_disas_context.
Removed pc_next return from translate_insn.
Removed tcg_check_temp_count from generic loop.
Moved gen_io_end to exactly match gen_io_start.
Use qemu_log instead of error_report for temporary leaks.
Moved TB size/icount assignments before disas_log.]
Signed-off-by: Richard Henderson <rth@twiddle.net>
There's nothing magic about the exception that we generate in order
to execute the magic kernel page. We can and should allow gdb to
set a breakpoint at this location.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Fold DISAS_EXC and DISAS_TB_JUMP into DISAS_NORETURN.
In both cases all following code is dead. In the first
case because we have exited the TB via exception; in the
second case because we have exited the TB via goto_tb
and its associated machinery.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This target is not sophisticated in its use of cleanups at the
end of the translation loop. For the most part, any condition
that exits the TB is dealt with by emitting the exiting opcode
right then and there. Therefore the only is_jmp indicator that
is needed is DISAS_NORETURN.
For two stack segment modifying cases, we have not yet exited
the TB (therefore DISAS_NORETURN feels wrong), but intend to exit.
The caller of gen_movl_seg_T0 currently checks for any non-zero
value, therefore DISAS_TOO_MANY seems acceptable for that usage.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This will allow some amount of cleanup to happen before
switching the backends over to enum DisasJumpType.
Reviewed-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Lluís Vilanova <vilanova@ac.upc.edu>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This allows LOAD HALFWORD IMMEDIATE ON CONDITION,
eliminating one insn in some common cases.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This allows using a 3-operand insn form for some arithmetic,
logicals and shifts.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Use a switch instead of searching a table.
Acked-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
migration_incoming_state_destroy doesn't really destroy, it cleans up.
After a loadvm it's called, but the loadvm command can be run twice,
and so destroying an init-once mutex breaks on the second loadvm.
Reported-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170825141940.20740-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Stafford Horne <shorne@gmail.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
There's a race if someone does a 'stop' near the end of migrate;
the migration process goes through two runstates:
'finish migrate'
'postmigrate'
If the user issues a 'stop' between the two we end up with invalid
state transitions.
Add the transitions as valid.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170804175011.21944-1-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The function's stated contract is simple enough: "round down to the
nearest power of 2". Suggests the domain is the representable numbers
>= 1, because that's the smallest power of two.
The implementation doesn't check for domain errors, but returns
garbage instead:
* For negative arguments, pow2floor() returns -2^63, which is not even
a power of two, let alone the nearest one.
What sort of works is passing *unsigned* arguments >= 2^63. The
implicit conversion to signed is implementation defined, but
commonly yields the (negative) two's complement. pow2floor() then
returns -2^63. Callers that convert that back to unsigned get the
correct value 2^63.
* For a zero argument, pow2floor() shifts right by 64. Undefined
behavior. Common actual behavior is to shift by 0, yielding -2^63.
Fix by switching from int64_t to uint64_t and amending the contract to
map zero to zero.
Callers are fine with that:
* memory_access_size()
This function makes no sense unless the argument is positive and the
return value fits into int.
* raw_refresh_limits()
Passes an int between 1 and BDRV_REQUEST_MAX_BYTES.
* iscsi_refresh_limits()
Passes an integer between 0 and INT_MAX, converts the result to
uint32_t. Passing zero would be undefined behavior, but commonly
yield zero. The patch gives us the zero without the undefined
behavior.
* cache_init()
Passes a positive int64_t argument.
* xbzrle_cache_resize()
Passes a positive int64_t argument (>= TARGET_PAGE_SIZE, actually).
* spapr_node0_size()
Passes a positive uint64_t argument, and converts the result to
hwaddr, i.e. uint64_t.
* spapr_populate_memory()
Passes a positive hwaddr argument, and converts the result to
hwaddr.
Cc: Juan Quintela <quintela@redhat.com>
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Cc: Eric Blake <eblake@redhat.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1501148776-16890-3-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
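The resulting helper is essentially the following; a sketch of the amended contract (unsigned argument, zero maps to zero), not necessarily the exact QEMU implementation:

  #include <stdint.h>

  /* Round down to the nearest power of 2; pow2floor(0) == 0 by contract. */
  static inline uint64_t pow2floor(uint64_t value)
  {
      if (value == 0) {
          return 0;
      }
      /* Clear the lowest set bit until only the most significant one is left. */
      while (value & (value - 1)) {
          value &= value - 1;
      }
      return value;
  }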
If the bdrv_inactivate_all fails near the end of the migration,
the migration will fail and often the only diagnostics in the log
are an I/O error which you can't distinguish from an error on
the socket connection.
Add an error so we know when it's actually a block problem.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170822170212.27347-1-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
After calling qcow2_inactivate(), all qcow2 caches must be flushed, but this
may not happen, because the last call, qcow2_store_persistent_dirty_bitmaps(),
can lead to marking the l2/refcount cache as dirty.
Let's move qcow2_store_persistent_dirty_bitmaps() before the cache flushing
to fix it.
Cc: qemu-stable@nongnu.org
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
block/throttle.c uses the existing I/O throttle infrastructure inside a
block filter driver. I/O operations are intercepted in the filter's
read/write coroutines and handed over to block/throttle-groups.c.
The driver can be used with the syntax
-drive driver=throttle,file.filename=foo.qcow2,throttle-group=bar
which registers the throttle filter node with the ThrottleGroup 'bar'. The
given group must be created beforehand with object-add or -object.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Currently, we cannot use mttcg for running strong memory model guests
on weak memory model hosts due to missing ordering semantics.
We implicitly generate fence instructions for stronger guests if an
ordering mismatch is detected. We generate fences only for the orders
for which fence instructions are necessary, for example a fence is not
necessary between a store and a subsequent load on x86 since its
absence in the guest binary tells us that ordering need not be
ensured. Also note that if we find multiple subsequent fence
instructions in the generated IR, we combine them in the TCG
optimization pass.
This patch allows us to boot an x86 guest on ARM64 hosts using mttcg.
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20170829063313.10237-4-bobby.prani@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Just make sure that nr_tables is size_t not int.
Once there, do the assert in the right place and be sure that we don't
have a division by zero.
Suggested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Tested-by: Cleber Rosa <crosa@redhat.com>
--
Drop the s/g_new0/g_malloc0/ change.
Avoid division by zero with assert (danp)
We were using -1 instead of the real size because the functions check
which is bigger, the size in bytes or the size of the iov. Recent gcc
versions barf at this.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Tested-by: Cleber Rosa <crosa@redhat.com>
--
Remove comments about this feature.
Fix missing -1.
We threatened to remove ia64 as host in v2.9.0. Its time has now come.
There are still some usages of defined(__ia64__) throughout the source
code that would be triggered if one were to enable TCI on an ia64 host.
Leave those alone for now.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
tests/vhost-user-test keeps failing on build-system since Aug 15:
ERROR:tests/vhost-user-test.c:835:test_flags_mismatch: child process (/i386/vhost-user/flags-mismatch/subprocess [4836]) failed unexpectedly
...
ERROR:tests/vhost-user-test.c:807:test_connect_fail: child process (/x86_64/vhost-user/connect-fail/subprocess [58910]) failed unexpectedly
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170905180602.28698-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit 206a0fc75d.
The linux-headers directory is for kernel headers which we keep in
sync with the upstream kernel via scripts/update-linux-headers.sh, so
we shouldn't be applying our code cleanups to it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
ThrottleGroup is converted to an object. This will allow the future
throttle block filter driver easy creation and configuration of throttle
groups in QMP and the CLI.
A new QAPI struct, ThrottleLimits, is introduced to provide a shared
struct for all throttle configuration needs in QMP.
ThrottleGroups can be created via CLI as
-object throttle-group,id=foo,x-iops-total=100,x-..
where x-* are individual limit properties. Since we can't add non-scalar
properties in -object this interface must be used instead. However,
setting these properties must be disabled after initialization because
certain combinations of limits are forbidden and thus configuration
changes should be done in one transaction. The individual properties
will go away when support for non-scalar values in CLI is implemented
and thus are marked as experimental.
ThrottleGroup also has a `limits` property that uses the ThrottleLimits
struct. It can be used to create ThrottleGroups or set the
configuration in existing groups as follows:
{ "execute": "object-add",
"arguments": {
"qom-type": "throttle-group",
"id": "foo",
"props" : {
"limits": {
"iops-total": 100
}
}
}
}
{ "execute" : "qom-set",
"arguments" : {
"path" : "foo",
"property" : "limits",
"value" : {
"iops-total" : 99
}
}
}
This also means a group's configuration can be fetched with qom-get.
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Some trivial fixes/cleanup and a fix to cause QEMU to error out gracefully
instead of aborting.
# gpg: Signature made Tue 05 Sep 2017 16:57:19 BST
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
virtfs: error out gracefully when mandatory suboptions are missing
9pfs: local: clarify fchmodat_nofollow() implementation
fsdev: fix memory leak in main()
9pfs: avoid sign conversion error simplifying the code
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We internally convert -virtfs to -fsdev/-device. If the user doesn't
provide the path or security_model suboptions, and the fsdev backend
requires them, we hit an assertion when populating the internal -fsdev
option:
util/qemu-option.c:547: opt_set: Assertion `opt->str' failed.
Aborted (core dumped)
Let's test the suboption presence on the command line before trying
to set it in the internal -fsdev option, and let the backend code
error out gracefully (ie, like it already does when the user passes
-fsdev on the command line).
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Since fchmodat(2) on Linux doesn't support AT_SYMLINK_NOFOLLOW, we have to
implement it using workarounds. There are two different ways, depending on
whether the system supports O_PATH or not.
In the case O_PATH is supported, we rely on the behavior of openat(2)
when passing O_NOFOLLOW | O_PATH and the file is a symbolic link. Even
if openat_file() already adds O_NOFOLLOW to the flags, this patch makes
it explicit that we need both creation flags to obtain the expected
behavior.
This is only cleanup, no functional change.
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
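The O_PATH workaround boils down to roughly the pattern below: open without following symlinks, then chmod through /proc/self/fd because fchmod() refuses O_PATH descriptors. A simplified sketch of the idea (the real 9pfs code has additional handling, e.g. for the non-O_PATH fallback and the symlink case):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  static int fchmodat_nofollow_sketch(int dirfd, const char *name, mode_t mode)
  {
      char proc_path[64];
      int fd, ret;

      /* O_PATH | O_NOFOLLOW: if "name" is a symbolic link we get an fd for
       * the link itself instead of silently following it. */
      fd = openat(dirfd, name, O_PATH | O_NOFOLLOW);
      if (fd < 0) {
          return -1;
      }

      /* fchmod() rejects O_PATH fds, so go through /proc/self/fd instead. */
      snprintf(proc_path, sizeof(proc_path), "/proc/self/fd/%d", fd);
      ret = chmod(proc_path, mode);
      close(fd);
      return ret;
  }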
Move the CoMutex and CoQueue inits inside throttle_group_register_tgm()
which is called whenever a ThrottleGroupMember is initialized. There's
no need for them to be separate.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
timer_cb() needs to know about the current Aio context of the throttle
request that is woken up. In order to make ThrottleGroupMember backend
agnostic, this information is stored in an aio_context field instead of
accessing it from BlockBackend.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This commit eliminates the 1:1 relationship between BlockBackend and
throttle group state. Users will be able to create multiple throttle
nodes, each with its own throttle group state, in the future. The
throttle group state cannot be per-BlockBackend anymore, it must be
per-throttle node. This is done by gathering ThrottleGroup membership
details from BlockBackendPublic into ThrottleGroupMember and refactoring
existing code to use the structure.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The TLS I/O channel test had mistakenly used && instead
of || when checking for handshake completion. As a
result it could terminate the handshake process before
it had actually completed. This was harmless before but
changes in GNUTLS 3.6.0 exposed this bug and caused the
test suite to fail.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
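Schematically, the corrected wait condition looks like this; the flags and the way they get updated are assumptions standing in for the test's actual session state:

  #include <glib.h>
  #include <stdbool.h>

  /* Loop until BOTH ends have completed the handshake. The old condition
   * used &&, i.e. it only kept looping while *both* were unfinished, so it
   * could stop as soon as a single side was done. The flags are expected to
   * be updated by callbacks driven from the main-context iteration. */
  static void wait_for_handshake(const bool *client_done, const bool *server_done)
  {
      while (!*client_done || !*server_done) {
          g_main_context_iteration(NULL, TRUE);
      }
  }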
These functions wait until they are able to read / write the full
requested data buffer(s).
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The non-blocking connect mechanism is obsolete, and it doesn't
work well for inet connections, because it calls getaddrinfo
first and getaddrinfo blocks on DNS lookups. Since commits
e65c67e4 & d984464e, the non-blocking connect for migration goes
through QIOChannel in a different manner (using a thread), and
nobody uses this old non-blocking connect anymore.
Any newly written code which needs a non-blocking connect should
use the QIOChannel code, so we can drop NonBlockingConnectHandler
as a concept entirely.
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Signed-off-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
@rpath and @sock_name are not freed and are thus leaked.
[groug, not really leaked since the program exits just after that. But it
is always good practice to free allocated memory]
Signed-off-by: Zhipeng Lu <lu.zhipeng@zte.com.cn>
Signed-off-by: Greg Kurz <groug@kaod.org>
(note this is how other functions also handle the errors).
hw/9pfs/9p.c:948:18: warning: Loss of sign in implicit conversion
offset = err;
^~~
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Switch from atexit.register() to a more elegant idiom of declaring
resources in a with statement:
  with FilePath('monitor.sock') as monitor_path, \
       VM() as vm:
      ...
The files and VMs will be automatically cleaned up whether the test
passes or fails.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170824072202.26818-4-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The scratch/ (TEST_DIR) directory is not automatically cleaned up after
test execution. It is the responsibility of tests to remove any files
they create.
A nice way of doing this is to declare files at the beginning of the
test and automatically remove them with a context manager:
  with iotests.FilePath('test.img') as img_path:
      qemu_img(...)
      qemu_io(...)
  # img_path is guaranteed to be deleted here
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170824072202.26818-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
There are a number of ways to ensure that the QEMU process is shut down
when the test ends, including atexit.register(), try: finally:, or
unittest.teardown() methods. All of these require extra code and the
programmer must remember to add vm.shutdown().
A nice solution is context managers:
  with VM(binary) as vm:
      ...
  # vm is guaranteed to be shut down here
Cc: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 20170824072202.26818-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
As future sun4u PCI topologies place the ebus containing the in-built devices
behind a PCI bridge, add a busA property to the PBM PCI bridge that is then
used to allow IO accesses by default.
This allows early fw_cfg/NVRAM/serial access to occur even before OpenBIOS
has had a chance to configure the PCI bridges.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Rather than referring to the PCI busses as bus2 and bus3, refer to them as
busA and busB as per the documentation. Also replace the long bus names with
the shorter pciA and pciB aliases (to make it easier to attach additional
devices to either from the command line).
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
In order to wire up the ebus PCI address spaces differently, we need
access to the underlying PCIDevice.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Omitting the check for whether bdrv_getlength() and bdrv_truncate()
failed meant that it was theoretically possible to return an
incorrect offset to the caller. More likely, conditions for either
of these functions to fail would also cause one of our other calls
(such as bdrv_pread() or bdrv_pwrite_sync()) to also fail, but
auditing that we are safe is difficult compared to just patching
things to always forward on the error rather than ignoring it.
Use osdep.h macros instead of open-coded rounding while in the
area.
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The old signature has an ambiguous meaning for a return of 0:
either no allocation was requested or necessary, or an error
occurred (but any errno associated with the error is lost to
the caller, which then has to assume EIO).
Better is to follow the example of qcow2, by changing the
signature to have a separate return value that cleanly
distinguishes between failure and success, along with a
parameter that cleanly holds a 64-bit value. Then update all
callers.
While auditing that all return paths return a negative errno
(rather than -1), I also simplified places where we can pass
NULL rather than a local Error that just gets thrown away.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
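Schematically, the signature change follows this pattern; the function and parameter names below are placeholders for the pattern, not the actual prototypes touched by the patch:

  /* Before: a single return value, where 0 is ambiguous (no allocation
   * needed vs. failure) and any errno is lost. */
  int64_t alloc_cluster_offset_old(BlockDriverState *bs, uint64_t guest_offset);

  /* After: the return value cleanly distinguishes success (0) from -errno,
   * and the 64-bit result comes back through an out-parameter. */
  int alloc_cluster_offset_new(BlockDriverState *bs, uint64_t guest_offset,
                               uint64_t *host_offset);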
bdrv_co_get_block_status_from_file() and
bdrv_co_get_block_status_from_backing() set *file to bs->file and
bs->backing respectively, so that bdrv_co_get_block_status() can recurse
to them. Future block drivers won't have to duplicate code to implement
this.
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Now that bdrv_truncate is passed to bs->file by default, remove the
callback from block/blkdebug.c and set is_filter to true. is_filter also gives
access to other callbacks that are forwarded automatically to bs->file for
filters.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This function is not used anywhere, so remove it.
Markus Armbruster adds:
The i82078 floppy device model used to call bdrv_media_changed() to
implement its media change bit when backed by a host floppy. This
went away in 21fcf36 "fdc: simplify media change handling".
Probably broke host floppy media change. Host floppy pass-through
was dropped in commit f709623. bdrv_media_changed() has never been
used for anything else. Remove it.
(Source is Message-ID: <87y3ruaypm.fsf@dusky.pond.sub.org>)
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The following functions fail if bs->drv is a filter and does not
implement them:
bdrv_probe_blocksizes
bdrv_probe_geometry
bdrv_truncate
bdrv_has_zero_init
bdrv_get_info
Instead, the call should be passed to bs->file if it exists, to allow
filter drivers to support those methods without implementing them. This
commit makes `drv->is_filter = true` imply that these callbacks will be
forwarded to bs->file by default, so disabling support for these
functions must be done explicitly.
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
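The forwarding rule amounts to something like the sketch below (simplified callback signature, not the exact block.c code): if the driver is a filter and does not implement the callback, the request is passed down to bs->file instead of failing.

  #include <errno.h>
  #include <stdint.h>

  static int bdrv_truncate_sketch(BlockDriverState *bs, int64_t offset)
  {
      BlockDriver *drv = bs->drv;

      if (!drv) {
          return -ENOMEDIUM;
      }
      if (!drv->bdrv_truncate) {
          if (drv->is_filter && bs->file) {
              /* Filters forward to their child by default. */
              return bdrv_truncate_sketch(bs->file->bs, offset);
          }
          return -ENOTSUP;
      }
      return drv->bdrv_truncate(bs, offset);    /* simplified callback shape */
  }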
target-arm:
* collection of M profile cleanups and minor bugfixes
* loader: handle ELF files with overlapping zero-init data
* virt: allow PMU instantiation with userspace irqchip
* wdt_aspeed: Add support for the reset width register
* cpu: Define new cpu_transaction_failed() hook
* Mark some SoC devices as not user-creatable
* arm: Fix aa64 ldp register writeback
* arm_gicv3_kvm: Fix compile warning
# gpg: Signature made Mon 04 Sep 2017 17:20:40 BST
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20170904-2: (33 commits)
arm_gicv3_kvm: Fix compile warning
target/arm: Fix aa64 ldp register writeback
hw/arm/digic: Mark device with user_creatable = false
hw/arm/aspeed_soc: Mark devices as user_creatable = false
target/arm: Allow deliver_fault() caller to specify EA bit
target/arm: Factor out fault delivery code
cputlb: Support generating CPU exceptions on memory transaction failures
cpu: Define new cpu_transaction_failed() hook
memory.h: Move MemTxResult type to memattrs.h
aspeed_soc: Propagate silicon-rev to watchdog
watchdog: wdt_aspeed: Add support for the reset width register
target/arm/kvm: pmu: improve error handling
hw/arm/virt: allow pmu instantiation with userspace irqchip
target/arm/kvm: pmu: split init and set-irq stages
hw/arm/virt: add pmu interrupt state
hw/arm: use defined type name instead of hard-coded string
loader: Ignore zero-sized ELF segments
loader: Handle ELF files with overlapping zero-initialized data
nvic: Implement "user accesses BusFault" SCS region behaviour
armv7m_nvic.h: Move from include/hw/arm to include/hw/intc
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix the following warning:
/home/pranith/qemu/hw/intc/arm_gicv3_kvm.c:296:17: warning: logical not is only applied to the left hand side of this bitwise operator [-Wlogical-not-parentheses]
if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
^ ~
/home/pranith/qemu/hw/intc/arm_gicv3_kvm.c:296:17: note: add parentheses after the '!' to evaluate the bitwise operator first
if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
^
/home/pranith/qemu/hw/intc/arm_gicv3_kvm.c:296:17: note: add parentheses around left hand side expression to silence this warning
if (!c->gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
^
This logic error meant we were not setting the PTZ
bit when we should -- luckily as the comment suggests
this wouldn't have had any effects beyond making GIC
initialization take a little longer.
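For reference, a stand-alone demonstration of the precedence problem and the intended check (the bit definition is local to this demo):
#include <stdint.h>
#include <stdio.h>

#define GICR_CTLR_ENABLE_LPIS (1u << 0)   /* bit value local to this demo */

int main(void)
{
    uint32_t gicr_ctlr = 0;

    /* Buggy form: '!' binds tighter than '&', so this evaluates
     * (!gicr_ctlr) & GICR_CTLR_ENABLE_LPIS and triggers the compiler
     * warning quoted above. */
    if (!gicr_ctlr & GICR_CTLR_ENABLE_LPIS) {
        printf("buggy check fires even though no bit was tested\n");
    }

    /* Intended form: test whether the LPI enable bit is clear. */
    if (!(gicr_ctlr & GICR_CTLR_ENABLE_LPIS)) {
        printf("LPIs are disabled\n");
    }
    return 0;
}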
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-id: 20170829173226.7625-1-bobby.prani@gmail.com
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU currently shows some unexpected behavior when the user tries to
do a "device_add digic" on an unrelated ARM machine like integratorcp
in "-nographic" mode (the device_add command does not immediately
return to the monitor prompt), and trying to "device_del" the device
later results in a "qemu/qdev-monitor.c:872:qdev_unplug: assertion
failed: (hotplug_ctrl)" error condition.
Looking at the realize function of the device, it uses serial_hds
directly and this means that the device can not be added a second
time, so let's simply mark it with "user_creatable = false" now.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU currently aborts if the user is accidentally trying to
do something like this:
$ aarch64-softmmu/qemu-system-aarch64 -S -M integratorcp -nographic
QEMU 2.9.93 monitor - type 'help' for more information
(qemu) device_add ast2400
Unexpected error in error_set_from_qdev_prop_error()
at hw/core/qdev-properties.c:1032:
Aborted (core dumped)
The ast2400 SoC devices are clearly not creatable by the user since
they are using the serial_hds and nd_table arrays directly in their
realize function, so mark them with user_creatable = false.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For external aborts, we will want to be able to specify the EA
(external abort type) bit in the syndrome field. Allow callers of
deliver_fault() to do that by adding a field to ARMMMUFaultInfo which
we use when constructing the syndrome values.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We currently have some similar code in tlb_fill() and in
arm_cpu_do_unaligned_access() for delivering a data abort or prefetch
abort. We're also going to want to do the same thing to handle
external aborts. Factor out the common code into a new function
deliver_fault().
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Acked-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Call the new cpu_transaction_failed() hook at the places where
CPU generated code interacts with the memory system:
io_readx()
io_writex()
get_page_addr_code()
Any access from C code (eg via cpu_physical_memory_rw(),
address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions
via cpu_transaction_failed(). Handling for transactions failures for
this kind of call should be done by using a function which returns a
MemTxResult and treating the failure case appropriately in the
calling code.
In an ideal world we would not generate CPU exceptions for
instruction fetch failures in get_page_addr_code() but instead wait
until the code translation process tried a load and it failed;
however that change would require too great a restructuring and
redesign to attempt at this point.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Currently we have a rather half-baked setup for allowing CPUs to
generate exceptions on accesses to invalid memory: the CPU has a
cpu_unassigned_access() hook which the memory system calls in
unassigned_mem_write() and unassigned_mem_read() if the current_cpu
pointer is non-NULL. This was originally designed before we
implemented the MemTxResult type that allows memory operations to
report a success or failure code, which is why the hook is called
right at the bottom of the memory system. The major problem with
this is that it means that the hook can be called even when the
access was not actually done by the CPU: for instance if the CPU
writes to a DMA engine register which causes the DMA engine to begin
a transaction which has been set up by the guest to operate on
invalid memory then this will cause the CPU to take an exception
incorrectly. Another minor problem is that currently if a device
returns a transaction error then this won't turn into a CPU exception
at all.
The right way to do this is to allow the CPU to respond
to memory system transaction failures at the point where the
CPU specific code calls into the memory system.
Define a new QOM CPU method and utility function
cpu_transaction_failed() which is called in these cases.
The functionality here overlaps with the existing
cpu_unassigned_access() because individual target CPUs will
need some work to convert them to the new system. When this
transition is complete we can remove the old cpu_unassigned_access()
code.
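For illustration, a minimal self-contained sketch of the shape of such a hook; the real QEMU hook also receives the physical address, size, access type, MMU index, attributes and return address:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins for the real QEMU types. */
typedef int MemTxResult;
#define MEMTX_OK    0
#define MEMTX_ERROR 1

typedef struct CPUState CPUState;

typedef struct CPUClass {
    /* Invoked when a CPU-initiated transaction fails, at the point
     * where CPU-specific code called into the memory system. */
    void (*do_transaction_failed)(CPUState *cpu, uint64_t addr,
                                  MemTxResult response);
} CPUClass;

struct CPUState {
    const CPUClass *cc;
};

static void cpu_transaction_failed(CPUState *cpu, uint64_t addr,
                                   MemTxResult response)
{
    if (cpu->cc->do_transaction_failed) {
        cpu->cc->do_transaction_failed(cpu, addr, response);
    }
}

static void demo_hook(CPUState *cpu, uint64_t addr, MemTxResult response)
{
    (void)cpu;
    printf("transaction to 0x%" PRIx64 " failed: %d\n", addr, response);
}

int main(void)
{
    const CPUClass cc = { .do_transaction_failed = demo_hook };
    CPUState cpu = { .cc = &cc };

    cpu_transaction_failed(&cpu, 0xdead0000u, MEMTX_ERROR);
    return 0;
}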
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Move the MemTxResult type to memattrs.h. We're going to want to
use it in cpu/qom.h, which doesn't want to include all of
memory.h. In practice MemTxResult and MemTxAttrs are pretty
closely linked since both are used for the new-style
read_with_attrs and write_with_attrs callbacks, so memattrs.h
is a reasonable home for this rather than creating a whole
new header file for it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
This is required to configure differences in behaviour between the
AST2400 and AST2500 watchdog IPs.
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The reset width register controls how the pulse on the SoC's WDTRST{1,2}
pins behaves. A pulse is emitted if the external reset bit is set in
WDT_CTRL. On the AST2500 WDT_RESET_WIDTH can consume magic bit patterns
to configure push-pull/open-drain and active-high/active-low
behaviours and thus needs some special handling in the write path.
As some of the capabilities depend on the SoC version a silicon-rev
property is introduced, which is used to guard version-specific
behaviour.
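A rough sketch of the version guard; the names, values and register handling here are illustrative only, not the actual wdt_aspeed code:
#include <stdbool.h>
#include <stdint.h>

/* Illustrative silicon-rev value, not the real AST2500 identifier. */
#define EXAMPLE_AST2500_REV 0x04000000u

typedef struct ExampleWDTState {
    uint32_t silicon_rev;   /* property propagated from the embedding SoC */
    uint32_t reset_width;
} ExampleWDTState;

static void example_write_reset_width(ExampleWDTState *s, uint32_t value)
{
    bool is_ast2500 = (s->silicon_rev == EXAMPLE_AST2500_REV);

    if (!is_ast2500) {
        /* AST2400: no push-pull/open-drain or polarity magic, only the
         * plain pulse-width field is writable. */
        s->reset_width = value & 0xff;
        return;
    }
    /* AST2500: the write path would additionally decode the magic bit
     * patterns here before updating the drive-mode/polarity bits. */
    s->reset_width = value;
}

int main(void)
{
    ExampleWDTState wdt = { .silicon_rev = EXAMPLE_AST2500_REV };

    example_write_reset_width(&wdt, 0xff00abcd);
    return wdt.reset_width == 0xff00abcd ? 0 : 1;
}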
Signed-off-by: Andrew Jeffery <andrew@aj.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
If a KVM PMU init or set-irq attr call fails we just silently stop
the PMU DT node generation. The only way they could fail, though,
is if the attr's respective KVM has-attr call fails. But that should
never happen if KVM advertises the PMU capability, because both
attrs have been available since the capability was introduced. Let's
just abort if this should-never-happen stuff does happen, because,
if it does, then something is obviously horribly wrong.
Signed-off-by: Andrew Jones <drjones@redhat.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Message-id: 1500471597-2517-5-git-send-email-drjones@redhat.com
[PMM: change kvm32.c kvm_arm_pmu_init() to the new API too]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For embedded systems, notably ARM, one common use of ELF
file segments is that the 'physical addresses' represent load addresses
and the 'virtual addresses' execution addresses, such that
the load addresses are packed into ROM or flash, and the
relocation and zero-initialization of data is done at runtime.
This means that the 'memsz' in the segment header represents
the runtime size of the segment, but the size that needs to
be loaded is only the 'filesz'. In particular, paddr+memsz
may overlap with the next segment to be loaded, as in this
example:
0x70000001 off 0x00007f68 vaddr 0x00008150 paddr 0x00008150 align 2**2
filesz 0x00000008 memsz 0x00000008 flags r--
LOAD off 0x000000f4 vaddr 0x00000000 paddr 0x00000000 align 2**2
filesz 0x00000124 memsz 0x00000124 flags r--
LOAD off 0x00000218 vaddr 0x00000400 paddr 0x00000400 align 2**3
filesz 0x00007d58 memsz 0x00007d58 flags r-x
LOAD off 0x00007f70 vaddr 0x20000140 paddr 0x00008158 align 2**3
filesz 0x00000a80 memsz 0x000022f8 flags rw-
LOAD off 0x000089f0 vaddr 0x20002438 paddr 0x00008bd8 align 2**0
filesz 0x00000000 memsz 0x00004000 flags rw-
LOAD off 0x000089f0 vaddr 0x20000000 paddr 0x20000000 align 2**0
filesz 0x00000000 memsz 0x00000140 flags rw-
where the segment at paddr 0x8158 has a memsz of 0x2258 and
would overlap with the segment at paddr 0x8bd8 if QEMU's loader
tried to honour it. (At runtime the segments will not overlap
since their vaddrs are more widely spaced than their paddrs.)
Currently if you try to load an ELF file like this with QEMU then
it will fail with an error "rom: requested regions overlap",
because we create a ROM image for each segment using the memsz
as the size.
Support ELF files using this scheme, by truncating the
zero-initialized part of the segment if it would overlap another
segment. This will retain the existing loader behaviour for
all ELF files we currently accept, and also accept ELF files
which only need 'filesz' bytes to be loaded.
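For illustration, a self-contained sketch of the truncation idea, assuming segments sorted by paddr; it reuses the two overlapping segments from the dump above:
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified segment descriptor; the real loader works on the ELF
 * program headers (p_paddr, p_filesz, p_memsz). */
typedef struct Segment {
    uint64_t paddr;
    uint64_t filesz;
    uint64_t memsz;
} Segment;

/* Shrink the zero-initialized tail of a segment so that the ROM region
 * registered for it never overlaps the next segment.  The filesz bytes
 * are always kept. */
static void truncate_zero_init(Segment *segs, size_t n)
{
    for (size_t i = 0; i + 1 < n; i++) {
        uint64_t end = segs[i].paddr + segs[i].memsz;

        if (end > segs[i + 1].paddr) {
            uint64_t new_memsz = segs[i + 1].paddr - segs[i].paddr;

            if (new_memsz >= segs[i].filesz) {
                segs[i].memsz = new_memsz;
            }
        }
    }
}

int main(void)
{
    /* The two overlapping segments from the dump above. */
    Segment segs[] = {
        { .paddr = 0x8158, .filesz = 0xa80, .memsz = 0x22f8 },
        { .paddr = 0x8bd8, .filesz = 0x0,   .memsz = 0x4000 },
    };

    truncate_zero_init(segs, 2);
    printf("memsz truncated to 0x%" PRIx64 "\n", segs[0].memsz);  /* 0xa80 */
    return 0;
}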
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1502116754-18867-2-git-send-email-peter.maydell@linaro.org
The ARMv7M architecture specifies that most of the addresses in the
PPB region (which includes the NVIC, systick and system registers)
are not accessible to unprivileged accesses, which should
BusFault with a few exceptions:
* the STIR is configurably user-accessible
* the ITM (which we don't implement at all) is always
user-accessible
Implement this by switching the register access functions
to the _with_attrs scheme that lets us distinguish user
mode accesses.
This allows us to pull the handling of the CCR.USERSETMPEND
flag up to the level where we can make it generate a BusFault
as it should for non-permitted accesses.
Note that until the core ARM CPU code implements turning
MEMTX_ERROR into a BusFault the registers will continue to
act as RAZ/WI to user accesses.
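A hedged sketch of what a _with_attrs read handler can do with the user attribute; the types and offsets are simplified stand-ins, not the NVIC code:
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for MemTxAttrs/MemTxResult; the real callbacks
 * also take the device opaque pointer and the transaction size. */
typedef struct MemTxAttrs {
    unsigned user : 1;     /* set for unprivileged accesses */
} MemTxAttrs;

typedef int MemTxResult;
#define MEMTX_OK    0
#define MEMTX_ERROR 1

#define EXAMPLE_STIR_OFFSET 0xf00u     /* illustrative offset only */

static MemTxResult example_nvic_read(uint64_t offset, uint64_t *data,
                                     MemTxAttrs attrs, bool user_stir_enabled)
{
    if (attrs.user &&
        !(offset == EXAMPLE_STIR_OFFSET && user_stir_enabled)) {
        /* Unprivileged access to a privileged-only register: report a
         * transaction error so the core can turn it into a BusFault. */
        return MEMTX_ERROR;
    }
    *data = 0;   /* a real handler would dispatch to the register here */
    return MEMTX_OK;
}

int main(void)
{
    uint64_t data;
    MemTxAttrs user = { .user = 1 };

    /* Unprivileged read of a privileged-only offset must fail. */
    return example_nvic_read(0x100, &data, user, false) == MEMTX_ERROR ? 0 : 1;
}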
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-16-git-send-email-peter.maydell@linaro.org
For M profile the XPSR is a similar but not identical format to the
A profile CPSR/SPSR. (For instance the Thumb bit is in a different
place.) For guest accesses we make the M profile code go through
xpsr_read() and xpsr_write() which handle the different layout.
However for migration we use cpsr_read() and cpsr_write() to
marshal state into and out of the migration data stream. This
is pretty confusing and works more by luck than anything else.
Make M profile migration use xpsr_read() and xpsr_write() instead.
The most complicated part of this is handling the possibility
that the migration source is an older QEMU which hands us a
CPSR format value; helpfully we can always tell the two apart.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-11-git-send-email-peter.maydell@linaro.org
We currently store the M profile CPU register state PRIMASK and
FAULTMASK in the daif field of the CPU state in its I and F
bits. This is a legacy from the original implementation, which
tried to share the cpu_exec_interrupt code between A profile
and M profile. We've since separated out the two cases because
they are significantly different, so now there is no common
code between M and A profile which looks at env->daif: all the
uses are either in A-only or M-only code paths. Sharing the state
fields now is just confusing, and will make things awkward
when we implement v8M, where the PRIMASK and FAULTMASK
registers are banked between security states.
Switch M profile over to using v7m.faultmask and v7m.primask
fields for these registers.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-10-git-send-email-peter.maydell@linaro.org
Tighten up the T32 decoder in the places where new v8M instructions
will be:
* TT/TTT/TTA/TTAT are in what was nominally LDREX/STREX r15, ...
which is UNPREDICTABLE:
make the UNPREDICTABLE behaviour be to UNDEF
* BXNS/BLXNS are distinguished from BX/BLX via the low 3 bits,
which in previous architectural versions are SBZ:
enforce the SBZ via UNDEF rather than ignoring it, and move
the "ARCH(5)" UNDEF case up so we don't leak a TCG temporary
* SG is in the encoding which would be LDRD/STRD with rn = r15;
this is UNPREDICTABLE and we currently UNDEF:
move this check further up the code so that we don't leak
TCG temporaries in the UNDEF case and have a better place
to put the SG decode.
This means that if a v8M binary is accidentally run on v7M
or if a test case hits something that we haven't implemented
yet the behaviour will be obvious (UNDEF) rather than obscure
(plough on treating it as a different instruction).
In the process, add some comments about the instruction patterns
at these points in the decode. Our Thumb and ARM decoders are
very difficult to understand currently, but gradually adding
comments like this should help to clarify what exactly has
been decoded when.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-5-git-send-email-peter.maydell@linaro.org
Currently get_phys_addr() has PMSAv7 handling before the
"is translation disabled?" check, and then PMSAv5 after it.
Tidy this up by making the PMSAv5 code handle the "MPU disabled"
case itself, so that we have all the PMSA code in one place.
This will make adding the PMSAv8 code slightly cleaner, and
also means that pre-v7 PMSA cores benefit from the MPU lookup
logging that the PMSAv7 codepath had.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-4-git-send-email-peter.maydell@linaro.org
M profile cores can never trap on WFI or WFE instructions. Check for
M profile in check_wfx_trap() to ensure this.
The existing code will do the right thing for v7M cores because
the hcr_el2 and scr_el3 registers will be all-zeroes and so we
won't attempt to trap, but when we start setting ARM_FEATURE_V8
for v8M cores the v8A handling of SCTLR.nTWE and .nTWI will not
give the right results.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 1501692241-23310-3-git-send-email-peter.maydell@linaro.org
QAPI patches for 2017-09-01
# gpg: Signature made Mon 04 Sep 2017 12:30:31 BST
# gpg: using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2017-09-01-v3: (47 commits)
qapi: drop the sentinel in enum array
qapi: Change data type of the FOO_lookup generated for enum FOO
qapi: Convert indirect uses of FOO_lookup[...] to qapi_enum_lookup()
qapi: Mechanically convert FOO_lookup[...] to FOO_str(...)
qapi: Generate FOO_str() macro for QAPI enum FOO
qapi: Avoid unnecessary use of enum lookup table's sentinel
qapi: Use qapi_enum_parse() in input_type_enum()
crypto: Use qapi_enum_parse() in qcrypto_block_luks_name_lookup()
quorum: Use qapi_enum_parse() in quorum_open()
block: Use qemu_enum_parse() in blkdebug_debug_breakpoint()
hmp: Use qapi_enum_parse() in hmp_migrate_set_parameter()
hmp: Use qapi_enum_parse() in hmp_migrate_set_capability()
tpm: Clean up model registration & lookup
tpm: Clean up driver registration & lookup
qapi: Drop superfluous qapi_enum_parse() parameter max
qapi: Update qapi-code-gen.txt examples to match current code
qapi-schema: Improve section headings
qapi-schema: Move queries from common.json to qapi-schema.json
qapi-schema: Make block-core.json self-contained
qapi-schema: Fold event.json back into qapi-schema.json
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently, a FOO_lookup is an array of strings terminated by a NULL
sentinel.
A future patch will generate enums with "holes". NULL-termination
will cease to work then.
To prepare for that, store the length in the FOO_lookup by wrapping it
in a struct and adding a member for the length.
The sentinel will be dropped next.
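For illustration, a hypothetical hand-written version of what the generated code might look like after the change; names and layout are illustrative, not the actual generator output:
#include <stdio.h>

typedef enum Foo {
    FOO_ONE,
    FOO_TWO,
    FOO__MAX,
} Foo;

typedef struct FooLookup {
    const char *const *array;
    int size;
} FooLookup;

static const char *const foo_array[FOO__MAX] = {
    [FOO_ONE] = "one",
    [FOO_TWO] = "two",
};

static const FooLookup Foo_lookup = {
    .array = foo_array,
    .size = FOO__MAX,
};

int main(void)
{
    /* Iteration is bounded by the stored size, so the scan no longer
     * relies on a NULL sentinel at the end of the array. */
    for (int i = 0; i < Foo_lookup.size; i++) {
        printf("%d -> %s\n", i, Foo_lookup.array[i]);
    }
    return 0;
}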
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170822132255.23945-13-marcandre.lureau@redhat.com>
[Basically redone]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-16-git-send-email-armbru@redhat.com>
[Rebased]
Currently, the FOO_lookup[] generated for QAPI enum types are
terminated by a NULL sentinel.
A future patch will generate enums with "holes". NULL-termination
will cease to work then.
To prepare for that, replace "have we reached the sentinel?"
predicates by "have we reached the FOO__MAX value?" predicates.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-12-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
The error message on invalid blkdebug events changes from
qemu-system-x86_64: LOCATION: Invalid event name "VALUE"
to
qemu-system-x86_64: LOCATION: invalid parameter value: VALUE
Slight degradation, but the message is sub-par even before the patch.
When complaining about a parameter value, both parameter name and
value should be mentioned, as the value may well not be unique. Left
for another day.
Also left is the error message's unhelpful location: it points to the
config=FILENAME rather than into that file.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170822132255.23945-11-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, commit message rewritten]
Cc: Kevin Wolf <kwolf@redhat.com>
Cc: Max Reitz <mreitz@redhat.com>
Cc: qemu-block@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-8-git-send-email-armbru@redhat.com>
We have a strict separation between enum TpmModel and tpm_models[]:
* TpmModel may have any number of members. It just happens to have one.
* tpm_register_model() uses the first empty slot in tpm_models[].
If you register more models than tpm_models[] has space for,
tpm_register_model() fails. Its caller silently ignores the
failure.
Registering the same TpmModel more than once has no effect other than
wasting tpm_models[] slots: tpm_model_is_registered() is happy with
the first one it finds.
Since we only ever register one model, and tpm_models[] has space for
just that one, this contraption even works.
Turn tpm_models[] into a straight map from enum TpmModel to bool. Much
simpler.
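A self-contained illustration of the resulting shape (mirroring, not copying, the actual patch):
#include <stdbool.h>
#include <stdio.h>

typedef enum TpmModel {
    TPM_MODEL_TPM_TIS,
    TPM_MODEL__MAX,
} TpmModel;

static bool tpm_models[TPM_MODEL__MAX];

static void tpm_register_model(TpmModel model)
{
    tpm_models[model] = true;   /* registering twice is now harmless */
}

static bool tpm_model_is_registered(TpmModel model)
{
    return tpm_models[model];
}

int main(void)
{
    tpm_register_model(TPM_MODEL_TPM_TIS);
    printf("registered: %d\n", tpm_model_is_registered(TPM_MODEL_TPM_TIS));
    return 0;
}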
Cc: Stefan Berger <stefanb@us.ibm.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-5-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
[Commit message typo fixed]
We have a strict separation between enum TpmType and be_drivers[]:
* TpmType may have any number of members. It just happens to have one.
* tpm_register_driver() uses the first empty slot in be_drivers[].
If you register more drivers than be_drivers[] has space for,
tpm_register_driver() fails. Its caller silently ignores the
failure.
If you register more than one with a given TpmType,
tpm_display_backend_drivers() will show all of them, but
tpm_driver_find_by_type() and tpm_get_backend_driver() will find
only the one that registered first.
Since we only ever register one driver, and be_drivers[] has space for
just that one, this contraption even works.
Turn be_drivers[] into a straight map from enum TpmType to driver.
Much simpler, and has a decent chance to actually work should we ever
acquire additional drivers.
While there, use qapi_enum_parse() in tpm_get_backend_driver().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170822132255.23945-8-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Rebased, superfluous initializer dropped, commit message rewritten]
Cc: Stefan Berger <stefanb@us.ibm.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503564371-26090-4-git-send-email-armbru@redhat.com>
The generated QEMU QMP reference is now structured as follows:
1.1 Introduction
1.2 Stability Considerations
1.3 Common data types
1.4 Socket data types
1.5 VM run state
1.6 Cryptography
1.7 Block devices
1.7.1 Block core (VM unrelated)
1.7.2 QAPI block definitions (vm unrelated)
1.8 Character devices
1.9 Net devices
1.10 Rocker switch device
1.11 TPM (trusted platform module) devices
1.12 Remote desktop
1.12.1 Spice
1.12.2 VNC
1.13 Input
1.14 Migration
1.15 Transactions
1.16 Tracing
1.17 QMP introspection
1.18 Miscellanea
Section "1.18 Miscellanea" is still too big: it documents 134 symbols.
Section "1.7.1 Block core (VM unrelated)" is also rather big: 128
symbols. All the others are of reasonable size.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503602048-12268-17-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Except for block-core.json, the sub-schemas are self-contained: if
they use a symbol defined in another sub-schema, they include that
sub-schema. To check, feed the sub-schema to qapi2texi (or any other
QAPI generator) along with the pragma from qapi-schema.json.
Fix up things to make block-core.json self-contained, too.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1503602048-12268-15-git-send-email-armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Bug: section "Rocker switch device" starts with the rocker stuff, but
then has unrelated stuff, like ReplayMode, xen-load-devices-state, ...
Cause: rocker.json is included in the middle of section "QMP commands".
Fix: include it in a sane place, namely next to the other sub-schemas.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <1503602048-12268-4-git-send-email-armbru@redhat.com>
Bug: introspection documentation is in section "Tracing commands".
Cause: sub-schema qapi/introspect.json lacks a section header, and
therefore goes into whatever section precedes its include.
Fix: add a section header.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <1503602048-12268-3-git-send-email-armbru@redhat.com>
Documentation generated with qapi2texi.py is in source order, with
included sub-schemas inserted at the first include directive
(subsequent include directives have no effect). To get a sane and
stable order, it's best to include each sub-schema just once, or
include it first in qapi-schema.json. Document that.
While there, drop a few redundant comments.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <1503602048-12268-2-git-send-email-armbru@redhat.com>
We check that all members of the QLit list are also in the QList. We
neglect to check the other direction. Fix that.
While there, use QLIST_FOREACH_ENTRY() to simplify the code and break
the loop on the first mismatch.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170825105913.4060-13-marcandre.lureau@redhat.com>
[Commit message improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
We check that all members of the QLit dictionary are also in the
QDict. We neglect to check the other direction.
Comparing the number of members suffices, because QDict can't
contain duplicate members, and putting duplicates in a QLit is a
programming error.
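The same idea, sketched with plain string arrays instead of QLit/QDict:
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

static bool contains(const char *const *keys, size_t n, const char *k)
{
    for (size_t i = 0; i < n; i++) {
        if (strcmp(keys[i], k) == 0) {
            return true;
        }
    }
    return false;
}

/* Checking one direction (every expected key is present) plus equal
 * sizes is enough, because the actual dictionary cannot contain
 * duplicate keys. */
static bool dicts_equal(const char *const *expected, size_t n_expected,
                        const char *const *actual, size_t n_actual)
{
    if (n_expected != n_actual) {
        return false;
    }
    for (size_t i = 0; i < n_expected; i++) {
        if (!contains(actual, n_actual, expected[i])) {
            return false;
        }
    }
    return true;
}

int main(void)
{
    static const char *const expected[] = { "name", "type" };
    static const char *const actual[]   = { "type", "name", "extra" };

    /* The extra member in 'actual' is now caught by the size check. */
    printf("%d\n", dicts_equal(expected, 2, actual, 3));   /* prints 0 */
    return 0;
}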
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170825105913.4060-12-marcandre.lureau@redhat.com>
[Commit message improved]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
compare_litqobj_to_qobj() lacks a qlit_ prefix. Moreover, "compare"
suggests -1, 0, +1 for less than, equal and greater than. The
function actually returns non-zero for equal, zero for unequal.
Rename to qlit_equal_qobject().
Its return type will be cleaned up in the next patch.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20170825105913.4060-6-marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
A command is a query if it has no side effect and yields a result.
Such commands are typically named query-FOO, but there are exceptions.
The basic idea is to find candidates with query-qmp-schema, filter out
the ones that aren't queries with an explicit blacklist, and test the
remaining ones against a QEMU with no special arguments.
The current blacklist is just add-fd.
The test can't do queries with arguments, because it knows nothing
about the arguments. No coverage for query-cpu-model-baseline,
query-cpu-model-comparison, query-cpu-model-expansion, query-rocker,
query-rocker-ports, query-rocker-of-dpa-flows, and
query-rocker-of-dpa-groups.
Most tested commands are expected to succeed. The test does not check
the return value then.
query-balloon and query-vm-generation-id are expected to fail because
they need a virtio-balloon / vmgenid device to succeed, and this test
is too dumb to set one up. Could be addressed later.
query-acpi-ospm-status and query-hotpluggable-cpus are expected to
fail because they require features provided only by special machine
types, and this test is too dumb to set that up. Could also be
addressed later.
Several commands may either be functional or stubs that always fail,
depending on build configuration. Ideally, the stubs shouldn't be in
query-qmp-schema, but that requires QAPI schema compile-time
configuration, which we don't have, yet. Until we do, we need to
figure out whether a command is a stub. When a suitable
CONFIG_FOO preprocessor symbol is available, use that. Else,
simply blacklist the command for now.
We get basic test coverage for the following commands, except as
noted:
qom-list-types
query-acpi-ospm-status (expected to fail)
query-balloon (expected to fail)
query-block
query-block-jobs
query-blockstats
query-chardev
query-chardev-backends
query-command-line-options
query-commands
query-cpu-definitions (blacklisted for now)
query-cpus
query-dump
query-dump-guest-memory-capability
query-events
query-fdsets
query-gic-capabilities (blacklisted for now)
query-hotpluggable-cpus (expected to fail)
query-iothreads
query-kvm
query-machines
query-memdev
query-memory-devices
query-mice
query-migrate
query-migrate-cache-size
query-migrate-capabilities
query-migrate-parameters
query-name
query-named-block-nodes
query-pci (blacklisted for now)
query-qmp-schema
query-rx-filter
query-spice
query-status
query-target
query-tpm
query-tpm-models
query-tpm-types
query-uuid
query-version
query-vm-generation-id (expected to fail)
query-vnc
query-vnc-servers
query-xen-replication-status
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <1502461148-10154-1-git-send-email-armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
[Typos in code under #ifndef and in the commit message fixed]
The test-io-channel-tls test was mistakenly using two of the
same directories as test-crypto-tlssession. This causes a
sporadic failure when using make -j$BIGNUM.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
GNUTLS 3.6.0 marked SHA1 as untrusted for certificates.
Unfortunately the gnutls_x509_crt_sign() method we are
using to create certificates in the test suite is fixed
to always use SHA1. We must switch to a different method
and explicitly ask for SHA256.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
$ make check-speed
tests/benchmark-crypto-hash.c: In function 'test_hash_speed':
tests/benchmark-crypto-hash.c:44:5: error: format '%ld' expects argument of type 'long int', but argument 2 has type 'size_t' [-Werror=format=]
g_print("Testing chunk_size %ld bytes ", chunk_size);
^
tests/benchmark-crypto-hash.c: In function 'main':
tests/benchmark-crypto-hash.c:62:9: error: format '%lu' expects argument of type 'long unsigned int', but argument 4 has type 'size_t' [-Werror=format=]
snprintf(name, sizeof(name), "/crypto/hash/speed-%lu", i);
^
cc1: all warnings being treated as errors
rules.mak:66: recipe for target 'tests/benchmark-crypto-hash.o' failed
make: *** [tests/benchmark-crypto-hash.o] Error 1
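One way to silence such warnings, not necessarily the exact change applied here, is the %zu conversion, which matches size_t on both 32-bit and 64-bit hosts:
#include <glib.h>
#include <stdio.h>

int main(void)
{
    size_t chunk_size = 4096;
    size_t i = 1;
    char name[64];

    /* %zu matches size_t, so neither line triggers -Wformat. */
    g_print("Testing chunk_size %zu bytes\n", chunk_size);
    snprintf(name, sizeof(name), "/crypto/hash/speed-%zu", i);
    puts(name);
    return 0;
}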
Reviewed-by: Longpeng(Mike) <longpeng@huawei.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
cpu_tilegx_init() always falls back to TYPE_TILEGX_CPU object
regardless of cpu_model. Put fallback logic into
tilegx_cpu_class_by_name() which would translate any cpu_model
into TYPE_TILEGX_CPU class and replace cpu_tilegx_init()
with cpu_generic_init().
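A self-contained analogue of the fallback (not the actual QOM code): every requested model resolves to the single CPU class, which is what lets cpu_generic_init() take over:
#include <stdio.h>

typedef struct ExampleCpuClass {
    const char *typename;
} ExampleCpuClass;

static ExampleCpuClass tilegx_cpu_class = { .typename = "tilegx-cpu" };

static ExampleCpuClass *example_class_by_name(const char *cpu_model)
{
    (void)cpu_model;           /* any model falls back to the same class */
    return &tilegx_cpu_class;
}

int main(void)
{
    printf("%s\n", example_class_by_name("whatever")->typename);
    return 0;
}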
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <1503592308-93913-15-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
cpu_nios2_init() always falls back to TYPE_NIOS2_CPU object
regardless of cpu_model. Put fallback logic into
nios2_cpu_class_by_name() which would translate any cpu_model
into TYPE_NIOS2_CPU class and replace cpu_nios2_init()
with cpu_generic_init()
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1503592308-93913-14-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
cpu_mb_init() always falls back to TYPE_MICROBLAZE_CPU object
regardless of cpu_model. Put fallback logic into
mb_cpu_class_by_name() which would translate any cpu_model
into TYPE_MICROBLAZE_CPU class and replace cpu_mb_init()
with cpu_generic_init().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1503592308-93913-13-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Drop the custom cpu_hppa_init() in favor of cpu_generic_init().
To make cpu_generic_init() work, all we need is to provide a
cc->class_by_name callback that resolves any cpu_model
to the sole TYPE_HPPA_CPU, matching the current behaviour.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1503592308-93913-11-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
cpu_alpha_init() used to provide a default fallback if an invalid
(i.e. non-existent) cpu_model was provided.
The dp264 machine provides its own default, so the sole user of the
fallback is the [bsd|linux]-user targets, which specify the 'any' cpu
model that falls back to "ev67" in cpu_alpha_init(). Push the fallback
handling into alpha_cpu_class_by_name() and replace cpu_alpha_init()
with cpu_generic_init().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Richard Henderson <rth@twiddle.net>
Message-Id: <1503592308-93913-10-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
cpu_s390x_init() is used only by *-user targets, indirectly
via the cpu_init() macro, and has a hack to assign ids to created
cpus (I'm not sure if 'id' really matters to *-user emulation).
So to be on the safe side, instead of having a custom wrapper to do the
numbering, replace it with cpu_generic_init() and use S390CPUClass::next_cpu_id,
which can serve the same purpose as the static variable, and move the cpu->id
initialization to s390_cpu_initfn for the CONFIG_USER_ONLY use-case.
PS:
The ifdef is ugly, but it allows us to hide an s390x detail that isn't
set by *-user targets and to reuse the generic cpu creation utility
for both machine and user emulation.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <1504185578-80843-1-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
With features converted to properties we can use the same
approach as x86 for feature parsing and drop the legacy
approach that manipulated the CPU instance directly.
The new sparc_cpu_parse_features() will allow only +-feat
and explicitly disable the feat=on|off syntax for now.
With that in place and sparc_cpu_parse_features() providing
the generic CPUClass::parse_features callback, cpu_sparc_init()
will do the same job as cpu_generic_init(), so replace the content
of cpu_sparc_init() with it.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1503672460-109436-1-git-send-email-imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
SPARCCPU::env was initialized from previously set properties
(with help of sparc_cpu_parse_features) in cpu_sparc_register().
However, there is no reason to keep it there, as this task is
typically done at realize time. So move the post-properties
initialization into sparc_cpu_realizefn, which brings
cpu_sparc_init() closer to cpu_generic_init().
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1503592308-93913-6-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
SPARC is the last target that uses the legacy way of parsing
and initializing cpu features. Drop the legacy approach and
convert features to properties so that SPARC can at minimum
benefit from the generic cpu_generic_init() and the +-feat
parser shared with x86.
PS:
The main purpose is to remove the legacy way of cpu creation as
a blocker for unifying cpu creation code across targets.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <1503592308-93913-5-git-send-email-imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Since commit 9262685b ("cpu: Factor out cpu_generic_init()"), the
features parsed by it were truncated to only the 1st feature
after the CPU name, because
featurestr = strtok(NULL, ",");
cc->parse_features(cpu, featurestr, &err);
would extract exactly one feature and the parse_features() callback
would parse it and only it, leaving the rest of the features ignored.
Reuse the approach from the custom x86 implementation, i.e. replace the
strtok() token parsing with g_strsplit(), which splits the feature string
into two parts, the name and the feature list, and pass the latter to the
parse_features() callback.
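For illustration, the splitting behaviour of g_strsplit() with max_tokens = 2 (the cpu option string is made up):
#include <glib.h>
#include <stdio.h>

int main(void)
{
    const char *cpu_option = "some-model,featA=on,featB=off";   /* made-up string */

    /* max_tokens = 2: element 0 is the model name, element 1 is the
     * whole remaining feature list, handed to parse_features() in one
     * piece instead of one strtok() token at a time. */
    gchar **parts = g_strsplit(cpu_option, ",", 2);

    printf("model:    %s\n", parts[0]);
    printf("features: %s\n", parts[1] ? parts[1] : "(none)");

    g_strfreev(parts);
    return 0;
}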
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Message-Id: <1503592308-93913-2-git-send-email-imammedo@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Add a new base CPU model called 'EPYC' to model processors from AMD EPYC
family (which includes EPYC 76xx,75xx,74xx, 73xx and 72xx).
The following feature bits have been added/removed compared to Opteron_G5:
Added: monitor, movbe, rdrand, mmxext, ffxsr, rdtscp, cr8legacy, osvw,
fsgsbase, bmi1, avx2, smep, bmi2, rdseed, adx, smap, clflushopt, sha,
xsaveopt, xsavec, xgetbv1, arat
Removed: xop, fma4, tbm
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Eduardo Habkost <ehabkost@redhat.com>
Cc: Tom Lendacky <Thomas.Lendacky@amd.com>
Signed-off-by: Brijesh Singh <brijesh.singh@amd.com>
Message-Id: <20170815170051.127257-1-brijesh.singh@amd.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The helper can be used for CPU object lookup using the CPU's
arch-specific ID (the one returned by CPUClass::get_arch_id()).
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
[Yi Wang: Added documentation comments]
Signed-off-by: Yi Wang <wang.yi59@zte.com.cn>
Signed-off-by: Yun Liu <liu.yunh@zte.com.cn>
[ehabkost: extracted cpu_by_arch_id() to a separate patch]
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The errp argument is ignored by all implementations of the
method, and user_creatable_del() would break if any
implementation set an error (because it calls error_setg(errp) if
the function returns false). Remove the unused parameter.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170829220337.23427-1-ehabkost@redhat.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
The chunk size sanity check in qxl_render_cursor works for
SPICE_CURSOR_TYPE_ALPHA cursors only. So support for
SPICE_CURSOR_TYPE_MONO cursors must have been broken for ages without anyone
noticing. Most likely it simply isn't used any more by guest drivers.
Drop the dead code.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170828123933.30323-2-kraxel@redhat.com
Instead pass around the address (aka offset into vga memory).
Add vga_read_* helper functions which apply vbe_size_mask to
the address, to make sure the address stays within the valid
range, similar to the cirrus blitter fixes (commits ffaf857778
and 026aeffcb4).
Impact: DoS for privileged guest users. qemu crashes with
a segfault, when hitting the guard page after vga memory
allocation, while reading vga memory for display updates.
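A schematic model of the masking idea; names and sizes are illustrative, not the actual vga.c helpers:
#include <stdint.h>

typedef struct ExampleVGAState {
    uint8_t *vram_ptr;
    uint32_t vbe_size_mask;    /* vbe_size - 1, vbe_size a power of two */
} ExampleVGAState;

static uint8_t example_vga_read_byte(ExampleVGAState *s, uint32_t addr)
{
    /* Every access is masked, so it can never run past the end of the
     * allocated VRAM into the guard page. */
    return s->vram_ptr[addr & s->vbe_size_mask];
}

int main(void)
{
    uint8_t vram[16] = { 1, 2, 3, 4 };
    ExampleVGAState s = {
        .vram_ptr = vram,
        .vbe_size_mask = sizeof(vram) - 1,
    };

    /* A read just past the end wraps back to offset 0 instead of
     * overflowing the buffer. */
    return example_vga_read_byte(&s, 16) == 1 ? 0 : 1;
}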
Fixes: CVE-2017-13672
Cc: P J P <ppandit@redhat.com>
Reported-by: David Buchanan <d@vidbuchanan.co.uk>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170828122906.18993-1-kraxel@redhat.com
vga display update mis-calculated the region for the dirty bitmap
snapshot in case split screen mode is used. This can trigger an
assert in cpu_physical_memory_snapshot_get_dirty().
Impact: DoS for privileged guest users.
Fixes: CVE-2017-13673
Fixes: fec5e8c92b
Cc: P J P <ppandit@redhat.com>
Reported-by: David Buchanan <d@vidbuchanan.co.uk>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170828123307.15392-1-kraxel@redhat.com
The conflict check added by commit c0644771 ("qapi: Reject
alternates that can't work with keyval_parse()") doesn't work
with the following declaration:
{ 'alternate': 'Alt',
'data': { 'one': 'bool',
'two': 'str' } }
It crashes with:
Traceback (most recent call last):
File "./scripts/qapi-types.py", line 295, in <module>
schema = QAPISchema(input_file)
File "/home/ehabkost/rh/proj/virt/qemu/scripts/qapi.py", line 1468, in __init__
self.exprs = check_exprs(parser.exprs)
File "/home/ehabkost/rh/proj/virt/qemu/scripts/qapi.py", line 958, in check_exprs
check_alternate(expr, info)
File "/home/ehabkost/rh/proj/virt/qemu/scripts/qapi.py", line 830, in check_alternate
% (name, key, types_seen[qtype]))
KeyError: 'QTYPE_QSTRING'
This happens because the previously-seen conflicting member
('one') can't be found at types_seen[qtype], but at
types_seen['QTYPE_BOOL'].
Fix the bug by moving the error check to the same loop that adds
new items to types_seen, raising an exception if types_seen[qt]
is already set.
Add two additional test cases that can detect the bug.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170717180926.14924-1-ehabkost@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
# gpg: Signature made Thu 31 Aug 2017 11:29:33 BST
# gpg: using RSA key 0xDAE8E10975969CE5
# gpg: Good signature from "Marc-André Lureau <marcandre.lureau@redhat.com>"
# gpg: aka "Marc-André Lureau <marcandre.lureau@gmail.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276 F62D DAE8 E109 7596 9CE5
* remotes/elmarco/tags/tidy-pull-request: (29 commits)
eepro100: replace g_malloc()+memcpy() with g_memdup()
test-iov: replace g_malloc()+memcpy() with g_memdup()
i386: replace g_malloc()+memcpy() with g_memdup()
i386: introduce ELF_NOTE_SIZE macro
decnumber: use DIV_ROUND_UP
kvm: use DIV_ROUND_UP
i386/dump: use DIV_ROUND_UP
ppc: use DIV_ROUND_UP
msix: use DIV_ROUND_UP
usb-hub: use DIV_ROUND_UP
q35: use DIV_ROUND_UP
piix: use DIV_ROUND_UP
virtio-serial: use DIV_ROUND_UP
console: use DIV_ROUND_UP
monitor: use DIV_ROUND_UP
virtio-gpu: use DIV_ROUND_UP
vga: use DIV_ROUND_UP
ui: use DIV_ROUND_UP
vnc: use DIV_ROUND_UP
vvfat: use DIV_ROUND_UP
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# gpg: Signature made Thu 31 Aug 2017 09:21:49 BST
# gpg: using RSA key 0x9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>"
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request:
qcow2: allocate cluster_cache/cluster_data on demand
qemu-doc: Add UUID support in initiator name
tests: migration/guestperf Python 2.6 argparse compatibility
docker.py: Python 2.6 argparse compatibility
scripts: add argparse module for Python 2.6 compatibility
misc: Remove unused Error variables
oslib-posix: Print errors before aborting on qemu_alloc_stack()
throttle: Test the valid range of config values
throttle: Make burst_length 64bit and add range checks
throttle: Make LeakyBucket.avg and LeakyBucket.max integer types
throttle: Remove throttle_fix_bucket() / throttle_unfix_bucket()
throttle: Make throttle_is_valid() a bit less verbose
throttle: Update the throttle_fix_bucket() documentation
throttle: Fix wrong variable name in the header documentation
nvme: Fix get/set number of queues feature, again
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
First batch of s390x patches:
- 2.11 compat machine
- support the new --s390-pgste linker option, making it possible to
avoid enabling the global vm.allocate_pgste systl if all pieces
are in place
- correctly identify some devices as not hotpluggable
- clean up some tests and enable them for s390x
- wire up the diag288 watchdog in tcg
- clean up dependencies on CONFIG_PCI, making it possible to disable
it by hand
- lots of cleanup in target/s390x/
- fix alignment of the ccw1 structure in the s390-ccw bios
- and some more bugfixes
# gpg: Signature made Wed 30 Aug 2017 17:40:34 BST
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# gpg: aka "Cornelia Huck <cohuck@redhat.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170830: (44 commits)
s390x/pci: fixup trap_msix()
pc-bios/s390-ccw.img: update image
s390-ccw: Fix alignment for CCW1
s390x/s390-stattrib: Mark the storage attribute as not user_creatable
target/s390x: cleanup cpu.h
s390x/kvm: move KVM declarations and stubs to separate files
s390x: avoid calling kvm_ functions outside of target/s390x/
target/s390x: move a couple of functions to cpu.c
target/s390x: introduce internal.h
target/s390x: move get_per_in_range() to misc_helper.c
target/s390x: move s390_do_cpu_reset() to diag.c
target/s390x: move psw_key_valid() to mem_helper.c
target/s390x: move cpu_mmu_idx_to_asc() to excp_helper.c
target/s390x: move cc_name() to helper.c
target/s390x: move gtod_*() declarations to s390-virtio.h
s390x: drop inclusion of sysemu/kvm.h from some files
s390x/cpumodel: factor out determination of default model name
target/s390x: no need to pass kvm_state to savevm_gtod handlers
target/s390x: simplify gs_allowed()
target/s390x: simplify ri_allowed()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
I found these patterns via grepping the source tree. I don't have a
coccinelle script for it!
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
I found these patterns via grepping the source tree. I don't have a
coccinelle script for it!
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
The if_fastq and if_batchq contain not only packets, but queues of packets
for the same socket. When sofree frees a socket, it thus has to clear ifq_so
from all the packets from the queues, not only the first.
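A schematic model of the fix, not the actual slirp mbuf layout:
#include <stddef.h>

/* Two-level walk: the outer loop visits the per-socket queue heads on
 * if_fastq/if_batchq, the inner loop visits every packet queued behind
 * each head, so no packet is left pointing at the freed socket. */
struct pkt {
    struct pkt *next_queue;  /* next per-socket queue on the interface */
    struct pkt *ifq_next;    /* next packet for the same socket */
    void *ifq_so;            /* owning socket; must not dangle */
};

static void clear_socket_refs(struct pkt *queue_heads, const void *so)
{
    for (struct pkt *q = queue_heads; q; q = q->next_queue) {
        for (struct pkt *p = q; p; p = p->ifq_next) {
            if (p->ifq_so == so) {
                p->ifq_so = NULL;
            }
        }
    }
}

int main(void)
{
    int sock;
    struct pkt p2 = { .ifq_so = &sock };
    struct pkt p1 = { .ifq_next = &p2, .ifq_so = &sock };

    clear_socket_refs(&p1, &sock);
    return (p1.ifq_so == NULL && p2.ifq_so == NULL) ? 0 : 1;
}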
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add nbd_co_request, to remove code duplications in
nbd_client_co_{pwrite,pread,...} functions. Also this is
needed for further refactoring.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170804151440.320927-8-vsementsov@virtuozzo.com>
[eblake: make nbd_co_request a wrapper, rather than merging two
existing functions]
Signed-off-by: Eric Blake <eblake@redhat.com>
Rename nbd_recv_coroutines_enter_all to nbd_recv_coroutines_wake_all,
as it most probably just adds all recv coroutines into co_queue_wakeup,
rather than directly entering them.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170804151440.320927-9-vsementsov@virtuozzo.com>
[eblake: tweak commit message]
Signed-off-by: Eric Blake <eblake@redhat.com>
Fix nbd_send_request to return int, as it returns a return value
of nbd_write (which is int), and the only user of nbd_send_request's
return value (nbd_co_send_request) considers it as int too.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170804151440.320927-5-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Refactor nbd_receive_reply to return 1 on success, 0 on eof (when no
data was read) and <0 for other cases, because the returned size of the
read data is not actually used.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170804151440.320927-4-vsementsov@virtuozzo.com>
[eblake: tweak function comments]
Signed-off-by: Eric Blake <eblake@redhat.com>
Refactor nbd_read_eof to return 1 on success, 0 on eof (when no
data was read) and <0 for other cases, because the returned size of
the read data is not actually used.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170804151440.320927-3-vsementsov@virtuozzo.com>
[eblake: tweak function comments, rebase to test 083 enhancements]
Signed-off-by: Eric Blake <eblake@redhat.com>
083 only tests TCP. Some failures might be specific to UNIX domain
sockets.
A few adjustments are necessary:
1. Generating a port number and waiting for server startup is
TCP-specific. Use the new nbd-fault-injector.py startup protocol to
fetch the address. This is a little more elegant because we don't
need netstat anymore.
2. The NBD filter does not work for the UNIX domain sockets URIs we
generate and must be extended.
3. Run all tests twice: once for TCP and once for UNIX domain sockets.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170829122745.14309-4-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Currently 083 waits for the nbd-fault-injector.py server to start up by
looping until netstat shows the TCP listen socket.
The startup protocol can be simplified by passing a 0 port number to
nbd-fault-injector.py. The kernel will allocate a port in bind(2) and
the final port number can be printed by nbd-fault-injector.py.
This should make it slightly nicer and less TCP-specific to wait for
server startup. This patch changes nbd-fault-injector.py, the next one
will rewrite server startup in 083.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170829122745.14309-3-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The following segfault is encountered if the NBD server closes the UNIX
domain socket immediately after negotiation:
Program terminated with signal SIGSEGV, Segmentation fault.
#0 aio_co_schedule (ctx=0x0, co=0xd3c0ff2ef0) at util/async.c:441
441 QSLIST_INSERT_HEAD_ATOMIC(&ctx->scheduled_coroutines,
(gdb) bt
#0 0x000000d3c01a50f8 in aio_co_schedule (ctx=0x0, co=0xd3c0ff2ef0) at util/async.c:441
#1 0x000000d3c012fa90 in nbd_coroutine_end (bs=bs@entry=0xd3c0fec650, request=<optimized out>) at block/nbd-client.c:207
#2 0x000000d3c012fb58 in nbd_client_co_preadv (bs=0xd3c0fec650, offset=0, bytes=<optimized out>, qiov=0x7ffc10a91b20, flags=0) at block/nbd-client.c:237
#3 0x000000d3c0128e63 in bdrv_driver_preadv (bs=bs@entry=0xd3c0fec650, offset=offset@entry=0, bytes=bytes@entry=512, qiov=qiov@entry=0x7ffc10a91b20, flags=0) at block/io.c:836
#4 0x000000d3c012c3e0 in bdrv_aligned_preadv (child=child@entry=0xd3c0ff51d0, req=req@entry=0x7f31885d6e90, offset=offset@entry=0, bytes=bytes@entry=512, align=align@entry=1, qiov=qiov@entry=0x7ffc10a91b20, f
+lags=0) at block/io.c:1086
#5 0x000000d3c012c6b8 in bdrv_co_preadv (child=0xd3c0ff51d0, offset=offset@entry=0, bytes=bytes@entry=512, qiov=qiov@entry=0x7ffc10a91b20, flags=flags@entry=0) at block/io.c:1182
#6 0x000000d3c011cc17 in blk_co_preadv (blk=0xd3c0ff4f80, offset=0, bytes=512, qiov=0x7ffc10a91b20, flags=0) at block/block-backend.c:1032
#7 0x000000d3c011ccec in blk_read_entry (opaque=0x7ffc10a91b40) at block/block-backend.c:1079
#8 0x000000d3c01bbb96 in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:79
#9 0x00007f3196cb8600 in __start_context () at /lib64/libc.so.6
The problem is that nbd_client_init() uses
nbd_client_attach_aio_context() -> aio_co_schedule(new_context,
client->read_reply_co). Execution of read_reply_co is deferred to a BH
which doesn't run until later.
In the mean time blk_co_preadv() can be called and nbd_coroutine_end()
calls aio_wake() on read_reply_co. At this point in time
read_reply_co's ctx isn't set because it has never been entered yet.
This patch simplifies the nbd_co_send_request() ->
nbd_co_receive_reply() -> nbd_coroutine_end() lifecycle to just
nbd_co_send_request() -> nbd_co_receive_reply(). The request is "ended"
if an error occurs at any point. Callers no longer have to invoke
nbd_coroutine_end().
This cleanup also eliminates the segfault because we don't call
aio_co_schedule() to wake up s->read_reply_co if sending the request
failed. It is only necessary to wake up s->read_reply_co if a reply was
received.
Note this only happens with UNIX domain sockets on Linux. It doesn't
seem possible to reproduce this with TCP sockets.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170829122745.14309-2-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
This is the follow-up patch that was discussed[*] as part of feedback to
qemu-iotest 194.
Changes in this patch:
- Supply 'job-id' parameter to `drive-mirror` invocation.
- Once migration completes, issue QMP `block-job-cancel` command on
the source QEMU to gracefully complete `drive-mirror` operation.
- Once the BLOCK_JOB_COMPLETED event is emitted, stop the NBD server
on the destination QEMU.
- Check for both the events: MIGRATION and BLOCK_JOB_COMPLETED.
With the above, the test will also be (almost) in sync with the
procedure outlined in the document 'live-block-operations.rst'[+]
(section: "QMP invocation for live storage migration with
``drive-mirror`` + NBD").
[*] https://lists.nongnu.org/archive/html/qemu-devel/2017-08/msg04820.html
-- qemu-iotests: add 194 non-shared storage migration test
[+] https://git.qemu.org/gitweb.cgi?p=qemu.git;a=blob;f=docs/interop/live-block-operations.rst
Signed-off-by: Kashyap Chamarthy <kchamart@redhat.com>
Message-Id: <20170829165058.8229-1-kchamart@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Most qcow2 files are uncompressed so it is wasteful to allocate (32 + 1)
* cluster_size + 512 bytes upfront. Allocate s->cluster_cache and
s->cluster_data when the first read operation is performed on a
compressed cluster.
The buffers are freed in .bdrv_close(). .bdrv_open() no longer has any
code paths that can allocate these buffers, so remove the free functions
in the error code path.
This patch can result in significant memory savings when many qcow2
disks are attached or backing file chains are long:
Before 12.81% (1,023,193,088B)
After 5.36% (393,893,888B)
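A rough sketch of the lazy-allocation idea, with illustrative sizes matching the figure quoted above (not the actual qcow2 code):
#include <stdint.h>
#include <stdlib.h>

typedef struct ExampleQcow2State {
    size_t cluster_size;
    uint8_t *cluster_cache;   /* decompressed cluster */
    uint8_t *cluster_data;    /* compressed input buffer */
} ExampleQcow2State;

static int example_read_compressed(ExampleQcow2State *s)
{
    /* Allocate lazily, on the first compressed read, instead of in
     * .bdrv_open(); images without compressed clusters never pay the
     * cost. */
    if (!s->cluster_cache) {
        s->cluster_cache = malloc(s->cluster_size);
        s->cluster_data = malloc(32 * s->cluster_size + 512);
        if (!s->cluster_cache || !s->cluster_data) {
            free(s->cluster_cache);
            free(s->cluster_data);
            s->cluster_cache = NULL;
            s->cluster_data = NULL;
            return -1;   /* -ENOMEM in the real code */
        }
    }
    /* ... decompress from s->cluster_data into s->cluster_cache ... */
    return 0;
}

int main(void)
{
    ExampleQcow2State s = { .cluster_size = 65536 };

    /* Nothing is allocated until the first compressed cluster is read. */
    return example_read_compressed(&s);
}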
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170821135530.32344-1-stefanha@redhat.com
Cc: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Let's do it just like the other architectures. Introduce kvm-stub.c
for stubs and kvm_s390x.h for the declarations.
Change license to GPL2+ and keep copyright notice.
As we are dropping the sysemu/kvm.h include from cpu.h, fix up includes.
Suggested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170818114353.13455-18-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
cpu.h should only contain what really has to be accessed outside of
target/s390x/. Add internal.h which can only be used inside target/s390x/.
Move everything that isn't fast enough to run away and restructure it
right away. We'll move all kvm_* stuff later.
Minor style fixes to avoid checkpatch warning to:
- struct LowCore: "{" goes into same line as typedef
- struct LowCore: add spaces around "-" in array length calculations
- time2tod() and tod2time(): move "{" to separate line
- get_per_atmid(): add space between ")" and "?". Move cases by one char.
- get_per_atmid(): drop extra parenthesis around (1 << 6)
Change license of new file to GPL2+ and keep copyright notice.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170818114353.13455-15-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
No need for kvm_enabled() as this function is only called from KVM and
there is no reason why it shouldn't be allowed for tcg. It is simply not
available under tcg.
Also, there is no need to check for the machine type anymore. Just like
ri_enabled(), we can directly use the stored flag, which results in
"true" for the "none" machine.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170818114353.13455-5-david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
QEMU currently aborts if the user tries to create a skey device:
$ s390x-softmmu/qemu-system-s390x -nographic -device s390-skeys-qemu
qemu-system-s390x: hw/s390x/s390-skeys.c:30: s390_get_skeys_device:
Assertion `ss' failed.
Aborted (core dumped)
The storage key devices are only meant to be instantiated one time,
internally. They can not be used by the user, so mark them with
user_creatable = false.
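For reference, this kind of restriction is normally applied in the device's
class_init; a minimal sketch of the pattern (not the literal hunk):

  static void s390_skeys_class_init(ObjectClass *oc, void *data)
  {
      DeviceClass *dc = DEVICE_CLASS(oc);

      /* instantiated internally by the machine; reject -device/device_add */
      dc->user_creatable = false;
  }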
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1503569328-22197-1-git-send-email-thuth@redhat.com>
Reviewed-by: Claudio Imbrenda <imbrenda@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
VIRTIO_PCI should properly depend on CONFIG_PCI.
With this change, we can switch off pci for s390x by removing
'CONFIG_PCI=y' from the default config.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
If we do not provide zpci, pci reconfiguration via sclp is not available
either. I/O adapter configuration, however, should always be present.
Rename the values that refer to I/O adapter configuration (instead of only
pci) to make things clearer.
Move length checking of the sccb for I/O adapter configuration into the
common sclp code (out of the pci code). This also fixes an issue where
the pci code would refer to a field in the sccb before checking whether
it was actually long enough.
Check for the adapter type in the sccb and return unrecognized adapter
type if the guest tries to issue I/O adapter configure/deconfigure for
a type other than pci or for pci if the zpci facility is not provided.
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Don't create the s390 pci host bridge if we do not provide the zpci
facility.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The nt2 event class is pci-only - don't look for events if pci is
not in the active cpu model.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The msi routing code in kvm calls some pci functions: provide
some stubs to enable builds without pci.
Also, to make this more obvious, guard them via a pci_available boolean
(which also can be reused in other places).
Fixes: e1d4fb2de ("kvm-irqchip: x86: add msi route notify fn")
Fixes: 767a554a0 ("kvm-all: Pass requester ID to MSI routing functions")
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Nothing in fsdev/ or hw/9pfs/ depends on pci; it should rather depend
on CONFIG_VIRTFS and CONFIG_VIRTIO/CONFIG_XEN only.
Acked-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
KVM guests on s390 need a different page table layout than normal
processes (2kb page table + 2kb page status extensions vs 2kb page table
only). As of today this has to be enabled via the vm.allocate_pgste
sysctl.
Newer kernels (>= 4.12) on s390 check for an S390_PGSTE program header
and enable the pgste page table extensions in that case. This makes the
vm.allocate_pgste sysctl unnecessary. We enable this program header for
the s390 system emulation (qemu-system-s390x) if:
- we build on s390, for s390 system emulation
- the linker supports --s390-pgste (binutils >= 2.29)
- KVM is enabled
This will allow distributions to disable the global vm.allocate_pgste
sysctl, which will improve the page table allocation for non KVM
processes as only 2kb chunks are necessary.
Cc: Christian Ehrhardt <christian.ehrhardt@canonical.com>
Cc: Alexander Graf <agraf@suse.de>
Cc: Dan Horak <dhorak@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1503483383-199649-1-git-send-email-borntraeger@de.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
QEMU currently aborts when the user tries to hot-unplug a diag288
device:
$ qemu-system-s390x -nographic -nodefaults -S -monitor stdio
QEMU 2.9.92 monitor - type 'help' for more information
(qemu) device_add diag288,id=x
(qemu) device_del x
**
ERROR:qemu/qdev-monitor.c:872:qdev_unplug: assertion failed: (hotplug_ctrl)
Aborted (core dumped)
The device is not designed as hot-pluggable (it should only be used
via the "-watchdog" parameter), so let's simply remove the possibility
to hotplug it to prevent that users can run into this ugly situation.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1502892528-22618-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
While the PoP is silent on the issue, z/VM documentation states
that unknown diagnose codes trigger a specification exception.
We already do that when running with kvm, so change tcg to do so
as well.
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Re-using the boot_sector code buffer from x86 for other architectures
is not very nice, especially if we add more architectures later. It's
also ugly that the test uses a huge pre-initialized array at all - the
size of the executable gets very large due to this array. So let's use a
separate buffer for each architecture instead, allocated from the heap,
so that we really just use the memory that we need.
Suggested-by: Michael Tsirkin <mst@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <1502431076-22849-2-git-send-email-thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The s390-ipl device can not be created by the user, since it is meant only
to be instantiated once internally to load the ROMs and kernel. If the user
tries to do a "device_add s390-ipl" via the monitor later, QEMU aborts with
a "ROM images must be loaded at startup" error message.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1502861458-30270-1-git-send-email-thuth@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The minimum Python version supported by QEMU is 2.6. The argparse
standard library module was only added in Python 2.7. Many scripts
would like to use argparse because it supports command-line
sub-commands.
This patch adds argparse. See the top of argparse.py for details.
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: John Snow <jsnow@redhat.com>
Message-id: 20170825155732.15665-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
There are a few cases in which we're passing an Error pointer to a function
only to discard it immediately afterwards without checking it. In
these cases we can simply remove the variable and pass NULL instead.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20170829120836.16091-1-berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
If QEMU is running on a system that's out of memory and mmap()
fails, QEMU aborts with no error message at all, making it hard
to debug the reason for the failure.
Add perror() calls that will print error information before
aborting.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170829212053.6003-1-ehabkost@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
LeakyBucket.burst_length is defined as an unsigned integer but the
code never checks for overflows and it only makes sure that the value
is not 0.
In practice this means that the user can set something like
throttling.iops-total-max-length=4294967300 even though it is larger than
UINT_MAX, and the final value after casting to unsigned int will be 4.
This patch changes the data type to uint64_t. This does not increase
the storage size of LeakyBucket, and allows us to assign the value
directly from qemu_opt_get_number() or BlockIOThrottle and then do the
checks directly in throttle_is_valid().
The value of burst_length does not have a specific upper limit,
but since the bucket size is defined by max * burst_length we have
to prevent overflows. Instead of going for UINT64_MAX or something
similar this patch reuses THROTTLE_VALUE_MAX, which allows I/O bursts
of 1 GiB/s for 10 days in a row.
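A sketch of the kind of guard this enables in throttle_is_valid() (the
error message wording is illustrative):

  /* The bucket size is max * burst_length, so reject combinations whose
   * product would exceed THROTTLE_VALUE_MAX (1 GiB/s for 10 days). */
  if (bkt->max && bkt->burst_length > THROTTLE_VALUE_MAX / bkt->max) {
      error_setg(errp, "burst length too high for this burst rate");
      return false;
  }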
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 1b2e3049803f71cafb2e1fa1be4fb47147a0d398.1503580370.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Both the throttling limits set with the throttling.iops-* and
throttling.bps-* options and their QMP equivalents defined in the
BlockIOThrottle struct are integer values.
Those limits are also reported in the BlockDeviceInfo struct and they
are integers there as well.
Therefore there's no reason to store them internally as double and do
the conversion every time we're setting or querying them, so this patch
uses uint64_t for those types. Let's also use an unsigned type because
we don't allow negative values anyway.
LeakyBucket.level and LeakyBucket.burst_level do however remain double
because their value changes depending on the fraction of time elapsed
since the previous I/O operation.
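Roughly, the resulting field types look like this (a sketch of the struct
after this change, with comments added here):

  typedef struct LeakyBucket {
      uint64_t avg;          /* average goal in units per second */
      uint64_t max;          /* leaky bucket max burst in units */
      double   level;        /* bucket level in units */
      double   burst_level;  /* bucket level in units (for bursts) */
      uint64_t burst_length; /* max length of the burst period, in seconds */
  } LeakyBucket;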
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: f29b840422767b5be2c41c2dfdbbbf6c5f8fedf8.1503580370.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The throttling code can change internally the value of bkt->max if it
hasn't been set by the user. The problem with this is that if we want
to retrieve the original value we have to undo this change first. This
is ugly and unnecessary: this patch removes the throttle_fix_bucket()
and throttle_unfix_bucket() functions completely and moves the logic
to throttle_compute_wait().
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Message-id: 5b0b9e1ac6eb208d709eddc7b09e7669a523bff3.1503580370.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The way the throttling algorithm works is that requests start being
throttled once the bucket level exceeds the burst limit. When we get
there the bucket leaks at the level set by the user (bkt->avg), and
that leak rate is what prevents guest I/O from exceeding the desired
limit.
If we don't allow bursts (i.e. bkt->max == 0) then we can start
throttling requests immediately. The problem with keeping the
threshold at 0 is that it only allows one request at a time, and as
soon as there's a bit of I/O from the guest every other request will
be throttled and performance will suffer considerably. That can even
make the guest unable to reach the throttle limit if that limit is
high enough, and that happens regardless of the block scheduler used
by the guest.
Increasing that threshold gives flexibility to the guest, allowing it
to perform short bursts of I/O before being throttled. Increasing the
threshold too much does not make a difference in the long run (because
it's the leak rate that defines the actual throughput), but it does
allow the guest to perform longer initial bursts and exceed the
throttle limit for a short while.
A burst value of bkt->avg / 10 allows the guest to perform 100ms'
worth of I/O at the target rate without being throttled.
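As an illustration of that choice (a sketch; the real logic sits inside
throttle_compute_wait()):

  /* Threshold above which requests start to be throttled.  With no
   * user-configured burst (bkt->max == 0) still allow an implicit burst
   * of bkt->avg / 10, i.e. about 100ms of I/O at the target rate. */
  static double throttle_bucket_threshold(const LeakyBucket *bkt)
  {
      return bkt->max ? (double)bkt->max : (double)bkt->avg / 10;
  }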
Signed-off-by: Alberto Garcia <berto@igalia.com>
Message-id: 31aae6645f0d1fbf3860fb2b528b757236f0c0a7.1503580370.git.berto@igalia.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The number of queues returned by the admin command should:
1) Only count the non-admin queues.
2) Be zero-based, meaning that '0 == one non-admin queue',
'1 == two non-admin queues', and so forth.
Because our `num_queues` means the number of queues _plus_ the admin
queue, the right calculation for the number returned from the admin
command is `num_queues - 2`, combining the two requirements mentioned.
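A hedged sketch of the resulting reply for the 'Number of Queues' feature
(helper name and shape are illustrative, not the actual hw/block/nvme.c
hunk):

  /* num_queues counts the admin queue too; the reply is zero-based and
   * covers I/O queues only, hence the "- 2".  The same count goes into
   * the low 16 bits (submission queues) and the high 16 bits (completion
   * queues). */
  static uint32_t nvme_num_queues_result(uint16_t num_queues)
  {
      uint32_t n = num_queues - 2;
      return cpu_to_le32(n | (n << 16));
  }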
The issue was discovered by reducing num_queues from 64 to 8 and running
a Linux VM with an SMP parameter larger than that (e.g. 22). It tries to
utilize all queues, and therefore fails with an invalid queue number
when trying to queue I/Os on the last queue.
Signed-off-by: Dan Aloni <dan@kernelim.com>
CC: Alex Friedman <alex@e8storage.com>
CC: Keith Busch <keith.busch@intel.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The following scenario leads to an assertion failure in
qio_channel_yield():
1. Request coroutine calls qio_channel_yield() successfully when sending
would block on the socket. It is now yielded.
2. nbd_read_reply_entry() calls nbd_recv_coroutines_enter_all() because
nbd_receive_reply() failed.
3. Request coroutine is entered and returns from qio_channel_yield().
Note that the socket fd handler has not fired yet so
ioc->write_coroutine is still set.
4. Request coroutine attempts to send the request body with nbd_rwv()
but the socket would still block. qio_channel_yield() is called
again and assert(!ioc->write_coroutine) is hit.
The problem is that nbd_read_reply_entry() does not distinguish between
request coroutines that are waiting to receive a reply and those that
are not.
This patch adds a per-request bool receiving flag so
nbd_read_reply_entry() can avoid spurious aio_wake() calls.
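Conceptually the fix looks like this (a sketch with illustrative names;
the real patch extends the NBD client's per-request bookkeeping):

  typedef struct {
      Coroutine *coroutine;
      bool receiving;   /* set only while waiting for its reply */
  } NBDClientRequest;

  /* Wake only the coroutines that are actually blocked on a reply; a
   * coroutine yielded in qio_channel_yield() for sending is left alone,
   * so the assert(!ioc->write_coroutine) case can no longer trigger. */
  static void nbd_recv_coroutines_wake_all(NBDClientRequest *reqs, int n)
  {
      int i;

      for (i = 0; i < n; i++) {
          if (reqs[i].coroutine && reqs[i].receiving) {
              aio_co_wake(reqs[i].coroutine);
          }
      }
  }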
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170822125113.5025-1-stefanha@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Tested-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
In the ->inactivate() callbacks, permissions are updated, which
typically involves a recursive check of the whole graph. Setting
BDRV_O_INACTIVE right before doing that creates a state that
bdrv_is_writable() returns false, which causes permission update
failure.
Reorder them so the flag is updated after calling the function. Note
that this doesn't break the assert in bdrv_child_cb_inactivate() because
for any specific BDS, we still update its flags first before calling
->inactivate() on it one level deeper in the recursion.
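In other words, roughly (a sketch of the reordering in the recursion, not
the literal diff):

  /* Run the driver's ->inactivate() while the node still looks writable,
   * then flip the flag; child nodes deeper in the recursion get the same
   * treatment one level at a time. */
  if (bs->drv->bdrv_inactivate) {
      ret = bs->drv->bdrv_inactivate(bs);
      if (ret < 0) {
          return ret;
      }
  }
  bs->open_flags |= BDRV_O_INACTIVE;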
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170823134242.12080-5-famz@redhat.com>
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
These two conditions correspond to the mirror job's source and target,
which need to be allowed as they are part of the non-shared storage
migration workflow: failing to inactivate either will result in a
failure during migration completion.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170823134242.12080-3-famz@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[eblake: improve comment grammar]
Signed-off-by: Eric Blake <eblake@redhat.com>
The 'm->numa_auto_assign_ram = numa_legacy_auto_assign_ram;' line
was supposed to be in pc_i440fx_2_9_machine_options() (see commit
3bfe5716 "numa: equally distribute memory on nodes"), but the
merge commit adb354dd ("Merge remote-tracking branch
'mst/tags/for_upstream' into staging") moved it to the
pc_i440fx_2_10_machine_options().
Move the line back to pc_i440fx_2_9_machine_options().
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-id: 20170818190943.23858-1-ehabkost@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
travis builds fail at HEAD at rc3 master with
block/nbd-client.c: In function ‘nbd_read_reply_entry’:
block/nbd-client.c:110:8: error: ‘ret’ may be used uninitialized in this function [-Werror=uninitialized]
fix it by initializing 'ret' to 0
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
ppc patch queue 2017-08-23
This is identical to the pull request from yesterday (20170822),
except that a bug in one patch is fixed so that it doesn't break TCG
on a ppc host.
Last minute ppc related fixes for qemu-2.10. I'm not sure if these
are critical enough to prompt another rc, but I'm submitting them for
consideration.
First, is Cornelia's fix for 480bc11e6 which meant "make check" would
always fail on a ppc host. Tracking that down delayed submission of
the rest of these patches, sorry.
The rest are all fairly important bugfixes for qemu crashes or guest
behaviour regression on ppc. Patches 2-4 specifically are fixes for
regressions from qemu-2.9, caused by the compatibility mode and
hotplug handling cleanups for the pseries machine type.
# gpg: Signature made Wed 23 Aug 2017 01:31:47 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170823:
hw/ppc/spapr_iommu: Fix crash when removing the "spapr-tce-table" device
hw/ppc/spapr_rtc: Mark the RTC device with user_creatable = false
hw/ppc/spapr: Fix segfault when instantiating a 'pc-dimm' without 'memdev'
spapr: Allow configure-connector to be called multiple times
ppc: fix ppc_set_compat() with KVM PR
target/ppc: 'PVR != host PVR' in KVM_SET_SREGS workaround
boot-serial-test: prefer tcg accelerator
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QEMU currently aborts unexpectedly when the user tries to add and
remove a "spapr-tce-table" device:
$ qemu-system-ppc64 -nographic -S -nodefaults -monitor stdio
QEMU 2.9.92 monitor - type 'help' for more information
(qemu) device_add spapr-tce-table,id=x
(qemu) device_del x
**
ERROR:qemu/qdev-monitor.c:872:qdev_unplug: assertion failed: (hotplug_ctrl)
Aborted (core dumped)
The device should not be accessible to users at all, it's just
used internally, so mark it with user_creatable = false.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
QEMU currently aborts unexpectedly when a user tries to do something
like this:
$ qemu-system-ppc64 -nographic -S -nodefaults -monitor stdio
QEMU 2.9.92 monitor - type 'help' for more information
(qemu) device_add spapr-rtc,id=spapr-rtc
(qemu) device_del spapr-rtc
**
ERROR:qemu/qdev-monitor.c:872:qdev_unplug: assertion failed: (hotplug_ctrl)
Aborted (core dumped)
The RTC device is not meant to be hot-pluggable - it's an internal
device only and it should not even be possible to create it a
second time with the "-device" parameter, so let's mark this
with "user_creatable = false".
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
QEMU currently crashes when trying to use a 'pc-dimm' on the pseries
machine without specifying its 'memdev' property. This happens because
pc_dimm_get_memory_region() does not check whether the 'memdev' property
has properly been set by the user. Looking closer at this function, it's
also obvious that it is using &error_abort to call another function - and
this is bad in a function that is used in the hot-plugging calling chain
since this can also cause QEMU to exit unexpectedly.
So let's fix these issues in a proper way now: Add a "Error **errp"
parameter to pc_dimm_get_memory_region() which we use in case the 'memdev'
property has not been set by the user, and which we can use instead of
the &error_abort, and change the callers of get_memory_region() to make
use of this "errp" parameter for proper error checking.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In case of in-kernel memory hot unplug, when the guest is not able
to remove all the LMBs that are requested for removal, it will add back
any LMBs that have been successfully removed. The DR Connectors of
these LMBs wouldn't have been unconfigured and hence the addition of
these LMBs will result in configure-connector call being issued on
LMB DR connectors that are already in configured state. Such
configure-connector calls will fail resulting in a DIMM which is
partially unplugged.
This however worked till recently before we overhauled the DRC
implementation in QEMU. Commit 9d4c0f4f0a: "spapr: Consolidate
DRC state variables" is the first commit where this problem shows up
as per git bisect.
Ideally the guest shouldn't be issuing a configure-connector call on an
already configured DR connector. However for now, work around this in
QEMU by allowing configure-connector to be called multiple times for
all types of DR connectors.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
[dwg: Corrected buglet that would have initialized fdt pointers ready
for reading on a device not present at reset]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When running in KVM PR mode, kvmppc_set_compat() always fails because the
current PR implementation doesn't handle KVM_REG_PPC_ARCH_COMPAT. Now that
the machine code unconditionally calls ppc_set_compat_all() at reset time
to restore the compat mode default value (commit 66d5c492dd), it is
impossible to start a guest with PR:
qemu-system-ppc64: Unable to set CPU compatibility mode in KVM:
Invalid argument
A tentative patch [1] was recently sent by Suraj to address the issue, but
it would prevent the compat mode from being turned off on reset. And we really
don't want to explicitly check for KVM PR. During the patch's review,
David suggested that we should only call the KVM ioctl() if the compat
PVR changes. This allows at least to run with KVM PR, provided no compat
mode is requested from the command line (which should be the case when
running PR nested). This is what this patch does.
While here, we also fix the side effect where KVM would fail but we would
change the CPU state in QEMU anyway.
[1] http://patchwork.ozlabs.org/patch/782039/
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit d5fc133eed ("ppc: Rework CPU compatibility testing
across migration") changed the way cpu_post_load behaves with
the PVR setting, causing an unexpected bug in KVM-HV migrations
between hosts that are compatible (POWER8 and POWER8E, for example).
Even with pvr_match() returning true, the guest freezes right after
cpu_post_load. The reason is that the guest kernel can't handle a
PVR value other than the running host's in KVM_SET_SREGS.
In [1] the possibility of a new KVM capability was discussed that
would indicate that the guest kernel can handle a different
PVR in KVM_SET_SREGS. Even if such a feature is implemented, there is
still the problem with older kernels that will not have this capability
and will fail to migrate.
This patch implements a workaround for that scenario. If running
with KVM, check if the guest kernel does not have the capability
(named here as 'cap_ppc_pvr_compat'). If it doesn't, call
kvmppc_is_pr() to see if the guest is running in KVM-HV. If all this
happens, set env->spr[SPR_PVR] to the same value as the current
host PVR. This ensures that we allow migrations with 'close enough'
PVRs to still work in KVM-HV but also makes the code ready for
this new KVM capability when it is done.
A new function called 'kvmppc_pvr_workaround_required' was created
to encapsulate the conditions said above and to avoid calling too
many kvm.c internals inside cpu_post_load.
[1] https://lists.gnu.org/archive/html/qemu-ppc/2017-06/msg00503.html
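Schematically, the post-load hook then does something like this (a sketch;
'host_pvr' is a placeholder for however the current host PVR is obtained):

  /* Only when KVM-HV is in use and the kernel lacks cap_ppc_pvr_compat:
   * KVM_SET_SREGS would reject the migrated PVR, so fall back to the
   * host's, which pvr_match() already deemed close enough. */
  if (kvmppc_pvr_workaround_required(cpu)) {
      env->spr[SPR_PVR] = host_pvr;  /* placeholder, see above */
  }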
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
[dwg: Fix for the case of using TCG on a PPC host]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Prefer to use the tcg accelerator if it is available: this is our only
real smoke test for tcg, and fast enough to use it for that.
Fixes: 480bc11e6 ("boot-serial-test: fallback to kvm accelerator")
Reported-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The mmio-interface device is not something we want to allow
users to create on the command line:
* it is intended as an implementation detail of the memory
subsystem, which gets created and deleted by that
subsystem on demand; it makes no sense to create it
by hand on the command line
* it uses a pointer property 'host_ptr' which can't be
set on the command line
Mark the device as not user_creatable to avoid confusion.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1502807418-9994-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Thomas Huth <thuth@redhat.com>
We are not providing the required single-copy atomic semantics for
the 64-bit operation that is the 32-bit paired load.
At the same time, leave the entire 64-bit value in cpu_exclusive_val
and stop writing to cpu_exclusive_high. This means that we do not
have to re-assemble the 64-bit quantity when it comes time to store.
At the same time, drop a redundant temporary and perform all loads
directly into the cpu_exclusive_* globals.
Tested-by: Alistair Francis <alistair.francis@xilinx.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20170815145714.17635-3-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When we switched NBD to use coroutines for qemu 2.9 (in particular,
commit a12a712a), we introduced a regression: if a server sends us
garbage (such as a corrupted magic number), we quit the read loop
but do not stop sending further queued commands, resulting in the
client hanging when it never reads the response to those additional
commands. In qemu 2.8, we properly detected that the server is no
longer reliable, and cancelled all existing pending commands with
EIO, then tore down the socket so that all further command attempts
get EPIPE.
Restore the proper behavior of quitting (almost) all communication
with a broken server: Once we know we are out of sync or otherwise
can't trust the server, we must assume that any further incoming
data is unreliable and therefore end all pending commands with EIO,
and quit trying to send any further commands. As an exception, we
still (try to) send NBD_CMD_DISC to let the server know we are going
away (in part, because it is easier to do that than to further
refactor nbd_teardown_connection, and in part because it is the
only command where we do not have to wait for a reply).
Based on a patch by Vladimir Sementsov-Ogievskiy.
A malicious server can be created with the following hack,
followed by setting NBD_SERVER_DEBUG to a non-zero value in the
environment when running qemu-nbd:
| --- a/nbd/server.c
| +++ b/nbd/server.c
| @@ -919,6 +919,17 @@ static int nbd_send_reply(QIOChannel *ioc, NBDReply *reply, Error **errp)
| stl_be_p(buf + 4, reply->error);
| stq_be_p(buf + 8, reply->handle);
|
| + static int debug;
| + static int count;
| + if (!count++) {
| + const char *str = getenv("NBD_SERVER_DEBUG");
| + if (str) {
| + debug = atoi(str);
| + }
| + }
| + if (debug && !(count % debug)) {
| + buf[0] = 0;
| + }
| return nbd_write(ioc, buf, sizeof(buf), errp);
| }
Reported-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170814213426.24681-1-eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
As in the case of nbd_export_new(), bdrv_invalidate_cache() can be
called when migration is still in progress. In this case we are not
ready to tighten the shared permissions fenced by blk->disable_perm.
Defer to a VM state change handler.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170815130740.31229-4-famz@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The 093 throttling test submits twice as many requests as the throttle
limit in order to ensure that we reach the limit. The remaining
requests are left in-flight at the end of each test iteration.
Commit 452589b6b4 ("vl.c/exit: pause cpus
before closing block devices") exposed a hang in 093. This happens
because requests are still in flight when QEMU terminates but
QEMU_CLOCK_VIRTUAL time is frozen. bdrv_drain_all() hangs forever since
throttled requests cannot complete.
Step the clock at the end of each test iteration so in-flight requests
actually finish. This solves the hang and is cleaner than leaving requests
in-flight.
Note that this could also be "fixed" by disabling throttling when drives
are closed in QEMU. That approach has two issues:
1. We must drain requests before disabling throttling, so the hang
cannot be easily avoided!
2. Any time QEMU disables throttling internally there is a chance that
malicious users can abuse the code path to bypass throttling limits.
Therefore it makes more sense to fix the test case than to modify QEMU.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20170815130502.8736-1-stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
The simpletrace.py script can pretty-print flight recorder ring buffers.
These are not full simpletrace binary trace files but just the end of a
trace file. There is no header and the event ID mapping information is
often unavailable since the ring buffer may have filled up and discarded
event ID mapping records.
The simpletrace.stp script that generates ring buffer traces uses the
same trace-events-all input file as simpletrace.py. Therefore both
scripts have the same global ordering of trace events. A dynamic event
ID mapping isn't necessary: just use the trace-events-all file as the
reference for how event IDs are numbered.
It is now possible to analyze simpletrace.stp ring buffers again using:
$ ./simpletrace.py trace-events-all path/to/ring-buffer
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170815084430.7128-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
This is a partial revert of commit
7f1b588f20 ("trace: emit name <-> ID
mapping in simpletrace header"), which broke the SystemTap flight
recorder because event mapping records may not be present in the ring
buffer when the trace is analyzed. This means simpletrace.py
--no-header does not know the event ID mapping needed to pretty-print
the trace.
Instead of numbering events dynamically, use a static event ID mapping
as dictated by the event order in the trace-events-all file.
The simpletrace.py script also uses trace-events-all so the next patch
will fix the simpletrace.py --no-header option to take advantage of this
knowledge.
Cc: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170815084430.7128-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
# gpg: Signature made Tue 15 Aug 2017 11:50:36 BST
# gpg: using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/build-and-test-pull-request:
docker: add centos7 image
docker: install more packages on CentOS to extend code coverage
docker: add Xen libs to centos6 image
docker: use one package per line in CentOS config
Makefile: Let "make check-help" work without running ./configure
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Currently if you do "make check-help" in a fresh checkout, only an error
is printed, which is not nice:
$ make check-help V=1
cc -nostdlib -o check-help.mo
cc: fatal error: no input files
compilation terminated.
rules.mak:115: recipe for target 'check-help.mo' failed
make: *** [check-help.mo] Error 1
Move the config-host.mak condition into the body of
tests/Makefile.include and always include the rule for check-help.
Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170810085025.14076-1-famz@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
This reverts a change that replaced the "rm -f" command with the
undefined variable RM (expected to be set by make), and causes the
"make clean" command to fail for a s390 target:
make[1]: Entering directory '/usr/src/qemu/build/pc-bios/s390-ccw'
rm -f *.timestamp
*.o *.d *.img *.elf *~ *.a
/bin/sh: *.o: command not found
Makefile:39: recipe for target 'clean' failed
make[1]: *** [clean] Error 127
make[1]: Leaving directory '/usr/src/qemu/build/pc-bios/s390-ccw'
Makefile:489: recipe for target 'clean' failed
make: *** [clean] Error 1
Fixes: 3e4415a751 ("pc-bios/s390-ccw: Add core files for the network
bootloading program")
Signed-off-by: Eric Farman <farman@linux.vnet.ibm.com>
Message-Id: <20170814204450.24118-2-farman@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
# gpg: Signature made Mon 14 Aug 2017 13:32:10 BST
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
qemu-doc: Mention host_net_add/-remove in the deprecation chapter
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The two HMP commands host_net_add and -remove have recently been
marked as deprecated, too, so we should now mention them in the
chapter of deprecated features.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
QEMU currently abort()s if the user tries to specify the mmio_interface
device without parameters:
x86_64-softmmu/qemu-system-x86_64 -nographic -device mmio_interface
qemu-system-x86_64: /home/thuth/devel/qemu/util/error.c:57: error_setv:
Assertion `*errp == ((void *)0)' failed.
Aborted (core dumped)
This happens because the realize function is trying to set the errp
twice in this case. After setting an error, the realize function
should immediately return instead.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The check script contains a commented out root user requirement,
probably because of its xfstests heritage. This requirement doesn't
apply to qemu-iotests, so it better be gone.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
The variables FULL_MKFS_OPTIONS and FULL_MOUNT_OPTIONS are commented
out, never used, and even refer to functions that do not exist. The last
time these were touched was around 8 years ago, so I guess it's safe
to assume outputting such information on test execution is not on the
radar.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Although this function is used, its implementation does nothing
besides echoing a variable name. There's no need to wrap this
functionality in a function, and based on the one usage it has, it's
not even required to adhere to a convention or code style.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
While Andrew S. Tanenbaum has a point by saying "Never underestimate the
bandwidth of a station wagon full of tapes hurtling down the highway",
we don't support that way of transportation in QEMU yet, so replace the
typo with the correct word "vlan".
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
While Andrew S. Tanenbaum has a point by saying "Never underestimate the
bandwidth of a station wagon full of tapes hurtling down the highway",
we don't support that way of transportation in QEMU yet, so replace the
typo with the correct word "vlan".
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1502365466-19432-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Currently, at least x86_64 and s390x support building with --disable-tcg.
Instead of forcing tcg (which causes the test to fail on such builds),
allow kvm to be used as well.
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
185 can sometimes produce wrong output like this:
185 2s ... - output mismatch (see 185.out.bad)
--- /work/src/qemu/master/tests/qemu-iotests/185.out 2017-07-14 \
15:14:29.520343805 +0300
+++ 185.out.bad 2017-08-07 16:51:02.231922900 +0300
@@ -37,7 +37,7 @@
{"return": {}}
{"return": {}}
{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, \
"event": "SHUTDOWN", "data": {"guest": false}}
-{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, \
"event": "BLOCK_JOB_CANCELLED", "data": {"device": "disk", \
"len": 4194304, "offset": 4194304, "speed": 65536, "type": \
"mirror"}}
+{"timestamp": {"seconds": TIMESTAMP, "microseconds": TIMESTAMP}, \
"event": "BLOCK_JOB_CANCELLED", "data": {"device": "disk", \
"len": 0, "offset": 0, "speed": 65536, "type": "mirror"}}
=== Start backup job and exit qemu ===
Failures: 185
Failed 1 of 1 tests
This is because, under heavy load, the quit can happen before the first
iteration of the mirror request has occurred. To make sure we've had
time to iterate, let's just add a sleep for 0.5 seconds before quitting.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It is reported that on Windows Subsystem for Linux, ofd operations fail
with -EINVAL. In other words, a QEMU binary built with system headers that
export F_OFD_SETLK doesn't necessarily run in an environment that
actually supports it:
$ qemu-system-aarch64 ... -drive file=test.vhdx,if=none,id=hd0 \
-device virtio-blk-pci,drive=hd0
qemu-system-aarch64: -drive file=test.vhdx,if=none,id=hd0: Failed to unlock byte 100
qemu-system-aarch64: -drive file=test.vhdx,if=none,id=hd0: Failed to unlock byte 100
qemu-system-aarch64: -drive file=test.vhdx,if=none,id=hd0: Failed to lock byte 100
As a matter of fact this is not WSL specific. It can happen when running
a QEMU compiled against a newer glibc on an older kernel, such as in
a containerized environment.
Let's do a runtime check to cope with that.
Reported-by: Andrew Baumann <Andrew.Baumann@microsoft.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A build-time check for OFD locks is not sufficient and can cause image open
errors when the runtime environment doesn't support them.
Add a helper function to additionally probe for them at runtime. Also provide
a qemu_has_ofd_lock() for callers to check the status.
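A stand-alone sketch of such a runtime probe (the real helper behind
qemu_has_ofd_lock() operates on a temporary file and caches the result;
this version just takes any open descriptor):

  #define _GNU_SOURCE
  #include <errno.h>
  #include <fcntl.h>
  #include <stdbool.h>

  /* True if the running kernel understands OFD locks, regardless of what
   * the build-time headers advertised. */
  static bool probe_ofd_lock(int fd)
  {
  #ifdef F_OFD_GETLK
      struct flock fl = {
          .l_whence = SEEK_SET,
          .l_start  = 0,
          .l_len    = 0,
          .l_type   = F_WRLCK,
      };
      /* An old kernel rejects the unknown fcntl command with EINVAL. */
      return fcntl(fd, F_OFD_GETLK, &fl) == 0 || errno != EINVAL;
  #else
      return false;
  #endif
  }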
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
It's been #if 0'd since its introduction in 2006, commit 585f8587.
We can revive dead code if we need it, but in the meantime, it has
bit-rotted (for example, not checking for failure in bdrv_getlength()).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Commit b43671f8 accidentally broke run_test.sh within tests/multiboot,
due to a subtle change in whitespace.
These two commands produce the same output (at least, for sane $IFS
of space-tab-newline):
echo -e "...$@..."
echo -e "...$*..."
But that's only because echo inserts spaces between multiple arguments
(the $@ case), while the $* form gives a single argument to echo with
the spaces already present.
But when converting to printf %b, there are no automatic spaces between
multiple arguments, so we HAVE to use $*.
It doesn't help that run_test.sh isn't part of 'make check'.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Just a single fix for an annoying regression introduced in 2.9 when fixing
CVE-2016-9602.
# gpg: Signature made Thu 10 Aug 2017 13:40:28 BST
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
9pfs: local: fix fchmodat_nofollow() limitations
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This function has to ensure it doesn't follow a symlink that could be used
to escape the virtfs directory. This could be easily achieved if fchmodat()
on linux honored the AT_SYMLINK_NOFOLLOW flag as described in POSIX, but
it doesn't. There was a tentative to implement a new fchmodat2() syscall
with the correct semantics:
https://patchwork.kernel.org/patch/9596301/
but it didn't gain much momentum. Also it was suggested to look at an O_PATH
based solution in the first place.
The current implementation covers most use-cases, but it notably fails if:
- the target path has access rights equal to 0000 (openat() returns EPERM),
=> once you've done chmod(0000) on a file, you can never chmod() again
- the target path is a UNIX domain socket (openat() returns ENXIO)
=> bind() of UNIX domain sockets fails if the file is on 9pfs
The solution is to use O_PATH: openat() now succeeds in both cases, and we
can ensure the path isn't a symlink with fstat(). The associated entry in
"/proc/self/fd" can hence be safely passed to the regular chmod() syscall.
The previous behavior is kept for older systems that don't have O_PATH.
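Roughly, the O_PATH code path looks like this (a sketch with error
handling trimmed; the real function also keeps the older fallback):

  #include <errno.h>
  #include <fcntl.h>
  #include <limits.h>
  #include <stdio.h>
  #include <sys/stat.h>
  #include <unistd.h>

  static int chmod_nofollow(int dirfd, const char *name, mode_t mode)
  {
      char proc_path[PATH_MAX];
      struct stat st;
      int fd, ret = -1;

      /* O_PATH succeeds even for mode-0000 files and UNIX domain sockets */
      fd = openat(dirfd, name, O_PATH | O_NOFOLLOW | O_NOCTTY | O_NONBLOCK);
      if (fd == -1) {
          return -1;
      }
      /* With O_PATH, O_NOFOLLOW opens the symlink itself instead of
       * failing, so verify with fstat() that we did not get one. */
      if (fstat(fd, &st) == 0 && !S_ISLNK(st.st_mode)) {
          snprintf(proc_path, sizeof(proc_path), "/proc/self/fd/%d", fd);
          ret = chmod(proc_path, mode);
      } else {
          errno = ELOOP;
      }
      close(fd);
      return ret;
  }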
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Zhi Yong Wu <zhiyong.wu@ucloud.cn>
Acked-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
ppc patch queue 2017-08-09
This series contains a number of bugfixes for ppc and related
machines, for the qemu-2.10.release. Some are true regressions,
others are serious enough and non-invasive enough to fix that it's
worth putting in 2.10 this late.
# gpg: Signature made Wed 09 Aug 2017 07:31:33 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170809:
spapr: Fix bug in h_signal_sys_reset()
spapr_drc: abort if object_property_add_child() fails
target/ppc: Add stub implementation of the PSSCR
target/ppc: Implement TIDR
ppc: fix double-free in cpu_post_load()
booke206: fix MAS update on tlb miss
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
pc, vhost: fixes for rc3
Fix up bugs and warnings in tests. Revert an experimental commit that I
put in by mistake: harmless but useless.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Wed 09 Aug 2017 02:23:17 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
libqtest: always set up signal handler for SIGABRT
libvhost-user: quit when no more data received
net: fix -netdev socket,fd= for UDP sockets
Revert "cpu: add APIs to allocate/free CPU environment"
acpi-test: update expected DSDT files
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The unicast case in h_signal_sys_reset() seems to be broken:
rather than selecting the target CPU, it looks like it will pick
either the first CPU or fail to find one at all.
Fix it by using the search function rather than open coding the
search.
This was found by inspection; the code appears to be unused because
the Linux kernel only uses the broadcast target.
Signed-off-by: Sam Bobroff <sam.bobroff@au1.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
object_property_add_child() can only fail in two cases:
- the child already has a parent, which shouldn't happen since the DRC was
allocated a few lines above
- the parent already has a child with the same name, which would mean the
caller tries to create a DRC that already exists
In both case, this is a QEMU bug and we should abort.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The PSSCR register added in POWER9 controls certain power saving mode
behaviours. Mostly, it's not relevant to TCG, however because qemu
doesn't know about it yet, it doesn't synchronize the state with KVM,
and thus it doesn't get migrated.
To fix that, this adds a minimal stub implementation of the register.
This isn't complete, even to the extent that an implementation is
possible in TCG, just enough to get migration working. We need to
come back later and at least properly filter the various fields in the
register based on privilege level.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
This adds a trivial implementation of the TIDR register added in
POWER9. This isn't particularly important to qemu directly - it's
used by accelerator modules that we don't emulate.
However, since qemu isn't aware of it, its state is not synchronized
with KVM and therefore not migrated, which can be a problem.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
When running nested with KVM PR, ppc_set_compat() fails and QEMU crashes
because of "double free or corruption (!prev)". The crash happens because
error_report_err() has already called error_free().
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When a tlb instruction miss happens, rw is set to 0 at the bottom
of cpu_ppc_handle_mmu_fault, which causes the MAS update function to miss
the SAS and TS bits in MAS6, MAS1 in booke206_update_mas_tlb_miss.
Just calling booke206_update_mas_tlb_miss with rw = 2 solves the issue.
Signed-off-by: KONRAD Frederic <frederic.konrad@adacore.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently abort handlers only work for the first test function
in a testcase, because the list of abort handlers is not properly
cleared when qtest_quit() is called.
qtest_quit() only deletes the kill_qemu_hook but doesn't completely
clear the abrt_hooks list. The effect is that abrt_hooks.is_setup is
never set to false and in a following test the abrt_hooks list is not
initialized and setup_sigabrt_handler() is not called.
One way to solve this is to clear the list in qtest_quit(), but
that means only asserts between qtest_start and qtest_quit will
be caught by the abort handler.
We can make abort handlers work in all cases if we always setup the
signal handler for SIGABRT in qtest_init.
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
End processing of messages when VHOST_USER_NONE
is received.
Without this we run into a vubr_panic() call and get
"PANIC: Unhandled request: 0"
Signed-off-by: Jens Freimann <jfreiman@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This patch fixes -netdev socket,fd= for UDP sockets
Currently -netdev socket,fd=<...> results in
qemu: error: specified mcastaddr "127.0.0.1" (0x7f000001) does not
contain a multicast address
qemu-system-x86_64: -netdev
socket,id=n1,fd=3: Device 'socket' could not be initialized
To fix these we need to allow specifying multicast and fd arguments
for the same netdev. With this the user can specify "-netdev
fd=3,mcast=<IP:port>"
Cc: Jason Wang <jasowang@redhat.com>
Fixes: 3d830459b1
Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
These days, many programs are including a bug-reporting address,
or better yet, a link to the project web site, at the tail of
their --help output. However, we were not very consistent at
doing so: only qemu-nbd and qemu-ga mentioned anything, with the
latter pointing to an individual person instead of the project.
Add a new #define that sets up a uniform string, mentioning both
bug reporting instructions and overall project details, and which
a downstream vendor could tweak if they want bugs to go to a
downstream database. Then use it in all of our binaries which
have --help output.
The canned text intentionally references http:// instead of https://
because our https website currently causes certificate errors in
some browsers. That can be tweaked later once we have resolved the
web site issues.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170803163353.19558-5-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
'amend' and 'create' were not listed alphabetically; hoist them
earlier. Separate the @end table block to make it easier to
copy-and-paste the addition of future sub-commands.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170803163353.19558-2-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Block layer patches for 2.10.0-rc2
# gpg: Signature made Tue 08 Aug 2017 14:56:15 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
block/nfs: fix mutex assertion in nfs_file_close()
qemu-iotests: Test reopen between read-only and read-write
qemu-io: Allow reopen read-write
block: Set BDRV_O_ALLOW_RDWR during rw reopen
block: Allow reopen rw without BDRV_O_ALLOW_RDWR
block: Fix order in bdrv_replace_child()
parallels: drop check that bdrv_truncate() is working
parallels: respect error code of bdrv_getlength() in allocate_clusters()
block: respect error code from bdrv_getlength in handle_aiocb_write_zeroes
vmdk: Fix error handling/reporting of vmdk_check
block/null: Remove 'filename' option
block: drop bdrv_set_key from BlockDriver
block/vhdx: check error return of bdrv_truncate()
block/vhdx: check error return of bdrv_flush()
block/vhdx: check for offset overflow to bdrv_truncate()
block/vhdx: check error return of bdrv_getlength()
quorum: Set sectors-count to 0 when reporting a flush error
qemu-iotests/109: Fix lock race condition
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Commit c096358e74 introduced assertion
checks for when qemu_mutex() functions are called without the
corresponding qemu_mutex_init() having initialized the mutex.
This uncovered a latent bug in qemu's nfs driver - in
nfs_client_close(), the NFSClient structure is overwritten with zeros,
prior to the mutex being destroyed.
Go ahead and destroy the mutex in nfs_client_close(), and change where
we call qemu_mutex_init() so that it is correctly balanced.
There are also a couple of memory leaks obscured by the memset, so this
fixes those as well.
Finally, we should be able to get rid of the memset(), as it isn't
necessary.
Cc: qemu-stable@nongnu.org
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This serves as a regression test for the bugs that were just fixed for
bdrv_reopen() between read-only and read-write mode.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
This allows qemu-iotests to test the switch between read-only and
read-write mode for block devices.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reopening an image should be consistent with opening it, so we should
set BDRV_O_ALLOW_RDWR for any image that is reopened read-write like in
bdrv_open_inherit().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
BDRV_O_ALLOW_RDWR is a flag that tells whether qemu can internally
reopen a node read-write temporarily because the user requested
read-write for the top-level image, but qemu decided that read-only is
enough for this node (a backing file).
bdrv_reopen() is different, it is also used for cases where the user
changed their mind and wants to update the options. There is no reason
to forbid making a node read-write in that case.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Commit 8ee03995 refactored the code incorrectly and broke the release of
permissions on the old BDS. Instead of changing the permissions to the
new required values after removing the old BDS from the list of
children, it only re-obtains the permissions it already had.
Change the order of operations so that the old BDS is removed again
before calculating the new required permissions.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
This would actually be strange and error prone. If truncate() fails
nowadays, there is something fatally wrong. Let's check for that during
the actual work.
The only fallback case is when the file is not zero initialized. In this
case we should switch to preallocation via fallocate().
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Markus Armbruster <armbru@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The original idea behind the code in question was the following: we have failed
to write zeroes with fallocate(FALLOC_FL_ZERO_RANGE) as the simplest
approach and via fallocate(FALLOC_FL_PUNCH_HOLE)/fallocate(0). The only
chance left is when the request goes beyond the end of the file. Thus we
should calculate the file length and respect the error code from that operation.
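A rough self-contained sketch of that fallback chain (Linux-specific; the
function name and error handling are illustrative, not the actual block
layer code):
    #define _GNU_SOURCE
    #include <fcntl.h>      /* fallocate(), FALLOC_FL_* */
    #include <sys/stat.h>
    #include <unistd.h>
    #include <errno.h>

    static int write_zeroes_sketch(int fd, off_t start, off_t len)
    {
        struct stat st;

        /* Preferred: ask the filesystem for a zeroed range directly. */
        if (fallocate(fd, FALLOC_FL_ZERO_RANGE, start, len) == 0) {
            return 0;
        }
        /* Next: punch a hole, then reallocate the range. */
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                      start, len) == 0 &&
            fallocate(fd, 0, start, len) == 0) {
            return 0;
        }
        /* Last chance: the request lies entirely beyond the end of the
         * file, so growing the file is enough -- but only if the length
         * could be read; respect the error code from that operation. */
        if (fstat(fd, &st) < 0) {
            return -errno;
        }
        if (start >= st.st_size) {
            return ftruncate(fd, start + len) == 0 ? 0 : -errno;
        }
        return -ENOTSUP;
    }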
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Markus Armbruster <armbru@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Errors from the callees must be captured and propagated to our caller,
ensure this for both find_extent() and bdrv_getlength().
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This option was only added to allow 'null-co://' and 'null-aio://' as
filenames, its value never served any actual purpose and was ignored.
Nevertheless it was accepted as '-drive driver=null,filename=foo'.
The correct way to enable the protocol prefixes (and that without adding
a useless -drive option) is implementing .bdrv_parse_filename. This is
what this patch does.
Technically, this is an incompatible change, but the null block driver
is only used for benchmarking, testing and debugging, and an option
without effect isn't likely to be used by anyone anyway, so no bad
effects are to be expected.
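The idea, in a heavily simplified sketch (the real hook is the driver's
.bdrv_parse_filename callback, which also takes QDict and Error
parameters, omitted here):
    #include <string.h>
    #include <stdbool.h>

    /* Accept only the bare protocol prefixes; they carry no further
     * information, so there is nothing to turn into options. */
    static bool null_parse_filename_sketch(const char *filename)
    {
        return strcmp(filename, "null-co://") == 0 ||
               strcmp(filename, "null-aio://") == 0;
    }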
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
VHDX uses uint64_t types for most offsets, following the VHDX spec.
However, bdrv_truncate() takes an int64_t value for the truncating
offset. Check for overflow before calling bdrv_truncate().
While we are here, replace the bit shifting with QEMU_ALIGN_UP as well.
N.B.: For a compliant image this is not an issue, as the maximum VHDX
image size is defined per the spec to be 64TB.
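The kind of check involved, as a self-contained sketch (ALIGN_UP stands
in for QEMU's QEMU_ALIGN_UP macro; the names are illustrative):
    #include <stdint.h>
    #include <errno.h>

    #define ALIGN_UP(n, m) (((n) + (m) - 1) / (m) * (m))

    static int check_new_size(uint64_t image_size, uint64_t block_size)
    {
        uint64_t new_size = ALIGN_UP(image_size, block_size);

        /* bdrv_truncate() takes an int64_t offset, so reject anything
         * that would not fit before the conversion happens. */
        if (new_size > INT64_MAX) {
            return -EINVAL;
        }
        return 0;   /* safe to pass (int64_t)new_size on */
    }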
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Calls to bdrv_getlength() were not checking for error. In vhdx.c, this
can lead to truncating an image file, so it is a definite bug. In
vhdx-log.c, the path for improper behavior is less clear, but it is best
to check in any case.
Some minor code movement of the log_guid initialization, as well.
Reported-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The QUORUM_REPORT_BAD event has fields to report the sector in which
the error was detected and the number of affected sectors starting
from that one. This is important for read and write errors, but not
for flush errors.
For flush errors the current code reports the total size of the disk
image. That is however not useful information in this case. Moreover,
the bdrv_getlength() call can fail, and there's no good way of
handling that failure.
Since we're reporting useless information and we cannot even guarantee
to do it in a consistent way, this patch changes the code to report 0
instead in all cases.
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A race condition is currently present between the clean up attempt of
the QEMU process and the execution of qemu-img. The actual (bad)
output is:
-Warning: Image size mismatch!
-Images are identical.
+qemu-img: Could not open '<build_dir>/tests/qemu-iotests/scratch/t.raw': Failed to get "consistent read" lock
+Is another process using the image?
A KILL signal is sent to the QEMU process, but qemu-img may begin to
run before the QEMU process is really gone. qemu-img will then
attempt to open the TEST_IMG file before it can secure a lock on it.
This attempts a more graceful shutdown, and waits for the QEMU process
to exit.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
virtio: fix for rc2
It turns out there's a way to set up SHPC on Q35: just put
a PCI-to-PCI bridge behind a DMI-to-PCI one. Our _OSC is
thus incorrect.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Mon 07 Aug 2017 22:39:20 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
cpu: add APIs to allocate/free CPU environment
hw/i386: allow SHPC for Q35 machine
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When emulating various SSE4.1 instructions such as pinsrd, the address
of a memory operand is computed without allowing for the 8-bit
immediate operand located after the memory operand, meaning that the
memory operand uses the wrong address in the case where it is
rip-relative. This patch adds the required rip_offset setting for
those instructions, so fixing some GCC test failures (13 in the gcc
testsuite in my GCC 6-based testing) when testing with a default CPU
setting enabling those instructions.
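To see why the immediate length matters for RIP-relative operands: the
displacement is relative to the end of the whole instruction, which
includes the trailing 8-bit immediate. A standalone arithmetic check
(the instruction length and numbers are made up):
    #include <stdint.h>
    #include <assert.h>

    static uint64_t rip_relative_ea(uint64_t insn_start, int len_before_imm,
                                    int imm_bytes, int32_t disp)
    {
        /* effective address = address of the next instruction + disp */
        uint64_t next_rip = insn_start + len_before_imm + imm_bytes;
        return next_rip + (int64_t)disp;
    }

    int main(void)
    {
        /* A pinsrd-style encoding with a 1-byte immediate: ignoring it
         * (the missing rip_offset) makes the address off by one. */
        assert(rip_relative_ea(0x1000, 6, 1, 0x100) ==
               rip_relative_ea(0x1000, 6, 0, 0x100) + 1);
        return 0;
    }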
Signed-off-by: Joseph Myers <joseph@codesourcery.com>
Message-Id: <alpine.DEB.2.20.1708080041391.28702@digraph.polyomino.org.uk>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The LUN0 emulation is just that, an emulation for a non-existing
LUN0. So we should be returning LUN_NOT_SUPPORTED for any request
coming from any other LUN.
And we should be aborting unhandled commands with INVALID OPCODE,
not LUN NOT SUPPORTED.
Signed-off-by: Hannes Reinecke <hare@suse.com>
Message-Id: <1501835795-92331-4-git-send-email-hare@suse.de>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Building QEMU on fedora26 with the latest gcc package fails:
CC ppc64-softmmu/target/ppc/kvm.o
In file included from include/sysemu/hw_accel.h:16:0,
from target/ppc/kvm.c:31:
target/ppc/kvm.c: In function ‘kvmppc_booke_watchdog_enable’:
include/sysemu/kvm.h:449:35: error: ‘args_tmp[i]’ may be used uninitialized
in this function [-Werror=maybe-uninitialized]
cap.args[i] = args_tmp[i]; \
^
target/ppc/kvm.c: In function ‘kvmppc_set_papr’:
include/sysemu/kvm.h:449:35: error: ‘args_tmp[i]’ may be used uninitialized
in this function [-Werror=maybe-uninitialized]
cc1: all warnings being treated as errors
$ rpm -q gcc
gcc-7.1.1-3.fc26.ppc64le
The compiler should obviously optimize this code away when no extra
argument is passed to kvm_vm_enable_cap() and kvm_vcpu_enable_cap(),
but it doesn't. This bug should be fixed one day in gcc, but we can
also change our code pattern so that we don't hit the issue anymore.
We work around this by using memcpy() instead of open-coding the copy.
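The shape of the new pattern, as a standalone sketch (the macro and
names are illustrative, not the actual include/sysemu/kvm.h code):
    #include <stdint.h>
    #include <string.h>

    #define ARRAY_LEN(a)  (sizeof(a) / sizeof((a)[0]))
    #define MIN_LEN(a, b) ((a) < (b) ? (a) : (b))

    /* One memcpy() instead of the element-wise loop that gcc 7 cannot
     * prove is bounded. */
    #define COPY_ARGS(dst, ...)                                       \
        do {                                                          \
            uint64_t args_tmp[] = { __VA_ARGS__ };                    \
            size_t n = MIN_LEN(ARRAY_LEN(args_tmp), ARRAY_LEN(dst));  \
            memcpy(dst, args_tmp, n * sizeof((dst)[0]));              \
        } while (0)
Invoked as COPY_ARGS(cap_args, arg1, arg2), it copies at most as many
elements as the destination array can hold.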
Signed-off-by: Greg Kurz <groug@kaod.org>
Message-Id: <150210580404.1343.7325713896658799315.stgit@bahia.lan>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This reverts commit a59629fcc6.
This is not needed anymore because the IOThread mutex is not
"magic" anymore (it need not kick the CPU thread) and also because
fork callbacks are only enabled at the very beginning of
QEMU's execution.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Because of -daemonize, system mode QEMU sometimes needs to fork() and
keep RCU enabled in the child. However, there is a possible deadlock
with synchronize_rcu:
- the CPU thread is inside a RCU critical section and wants to take
the BQL in order to do MMIO
- the monitor thread, which owns the BQL, calls rcu_init_lock
which tries to take the rcu_sync_lock
- the call_rcu thread has taken rcu_sync_lock in synchronize_rcu, but
synchronize_rcu needs the CPU thread to end the critical section
before returning.
This cannot happen for user-mode emulation, because it does not have
a BQL.
To fix it, assume that system mode QEMU only forks in preparation for
exec (except when daemonizing) and disable pthread_atfork as soon as
the double fork has happened.
Reported-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Tested-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Unmask previously masked SHPC feature in _OSC method.
Signed-off-by: Aleksandr Bezzubikov <zuban32s@gmail.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
There are trace probes in bdrv_co_readv|writev, however, the
block drivers are being gradually moved over to using the
bdrv_co_preadv|pwritev functions instead. As a result some
block drivers miss the current probes. Move the probes
into bdrv_co_preadv|pwritev instead, so that they are triggered
by more (all?) I/O code paths.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170804105036.11879-1-berrange@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
MIPS patches 2017-08-03
Changes:
KVM T&E segment support for TCG
malta: leave space for the bootmap after the initrd
Apply CP0.PageMask before writing into TLB entry
Fix fallout from indirect branch optimisation
# gpg: Signature made Thu 03 Aug 2017 15:32:59 BST
# gpg: using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA 2B5C 2238 EB86 D5F7 97C2
* remotes/yongbok/tags/mips-20170803:
target/mips: Fix RDHWR CC with icount
target/mips: Drop redundant gen_io_start/stop()
target/mips: Use BS_EXCP where interrupts are expected
target-mips: apply CP0.PageMask before writing into TLB entry
mips: Add KVM T&E segment support for TCG
mips: Improve segment defs for KVM T&E guests
mips/malta: leave space for the bootmap after the initrd
target-mips: Don't stop on [d]mtc0 DESAVE/KScratch
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
virtio: fix for rc2
Looks like the constant stream of additions of vhost-user devices is a
problem for some people who are concerned about external connections
from qemu. A per-device flag seems like overkill, but a single
configure flag seems like a sane way to support that, and it looks like
we need to do it before the release.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Thu 03 Aug 2017 13:57:57 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
build-sys: add --disable-vhost-user
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For a 64-bit ILP32 host, aligning to sizeof(long) is not enough.
Guess the minimum for any host is 8, as that covers uint64_t.
Qemu doesn't use a host long double or host vectors, except in
extremely limited circumstances.
Fixes a bus error for a sparc v8plus host.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Patch 85aa80813d changed the IF emitting the TST instruction,
but failed to change the ?: converting CMP to CMPEQ, so the
result of the TST is ignored.
Signed-off-by: Richard Henderson <rth@twiddle.net>
Learn to compile out vhost-user (net, scsi & upcoming users). Keep it
enabled by default on non-win32, which is assumed to be POSIX. Fail if
trying to enable it on win32.
When trying to make a vhost-user netdev, it gives the following error:
-netdev vhost-user,id=foo,chardev=chr-test: Parameter 'type' expects a netdev backend type
And a similar error with the HMP/QMP monitors.
While at it, rename CONFIG_VHOST_NET_TEST to CONFIG_VHOST_USER_NET_TEST
since it's a vhost-user specific variable.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
While parsing the DHCP options string in 'dhcp_decode', if an option's
length 'len' appears towards the end of the 'bp_vend' array, the ensuing
read could lead to an OOB memory access. Add a check to avoid it.
This is CVE-2017-11434.
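The kind of check involved, as a generic sketch of TLV option parsing
(field and constant names are illustrative, not slirp's actual ones):
    #include <stdint.h>
    #include <stddef.h>

    static void parse_options_sketch(const uint8_t *opts, size_t opts_len)
    {
        const uint8_t *p = opts;
        const uint8_t *end = opts + opts_len;

        while (p < end && *p != 0xff) {     /* 0xff ends the option list */
            uint8_t tag = *p++;

            if (tag == 0) {                 /* padding byte, no length */
                continue;
            }
            if (p >= end) {                 /* no room for the length byte */
                break;
            }
            uint8_t len = *p++;
            if (len > end - p) {            /* data would run past the buffer */
                break;
            }
            /* ... handle (tag, p, len) here ... */
            p += len;
        }
    }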
Reported-by: Reno Robert <renorobert@gmail.com>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
With "-netdev user,id=net0,dns=1.2.3.4"
error was:
qemu-system-i386: -netdev user,id=net0,dns=1.2.3.4: Device 'user' could not be initialized
Error is now:
qemu-system-i386: -netdev user,id=net0,dns=1.2.3.4: DNS doesn't belong to network
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
With pseries machine type a negative core-id is not managed properly:
-1 gives an inaccurate error message ("core -1 already populated"),
-2 crashes QEMU (core dump)
As a negative value seems to be invalid for any architecture,
instead of checking this in spapr_core_pre_plug() I think it's better
to check it in the generic part, core_prop_set_core_id().
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170802103259.25940-1-lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
RDHWR CC reads the CPU timer like MFC0 CP0_Count, so with icount enabled
it must set can_do_io while it calls the helper to avoid the "Bad icount
read" error. It should also break out of the translation loop to ensure
that timer interrupts are immediately handled.
Fixes: 2e70f6efa8 ("Add instruction counter.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
DMTC0 CP0_Cause does a redundant gen_io_start() and gen_io_end() pair,
even though this is done for all DMTC0 operations outside of the switch
statement. Remove these redundant calls.
Fixes: 5dc5d9f055 ("mips: more fixes to the MIPS interrupt glue logic")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Commit e350d8ca3a ("target/mips: optimize indirect branches") made
indirect branches able to directly find the next TB and jump straight to
it without breaking out of translated code and going around the main
execution loop. This breaks the assumption in target/mips/translate.c
that BS_STOP is sufficient to cause pending interrupts to be handled,
since interrupts are only checked in the main loop.
Fix a few of these assumptions by using gen_save_pc to update the saved
PC and using BS_EXCP instead of BS_STOP:
- [D]MFC0 CP0_Count may trigger a timer interrupt which should be
immediately handled.
- [D]MTC0 CP0_Cause may trigger an interrupt (but in fact translation
was only even being stopped in the DMTC0 case).
- [D]MTC0 CP0_<any> when icount is used is assumed could potentially
cause interrupts.
- EI may trigger an interrupt which was pending. I specifically hit
this case when running KVM nested in mipsel-softmmu. A timer
interrupt while the 2nd guest was executing is caught by KVM which
switches back to the normal Linux exception base and re-enables
interrupts with EI. Since the above commit QEMU doesn't leave
translated code until the nested KVM has already restored the KVM
exception base and returned to the 2nd guest, at which point it is
too late to check for pending interrupts and it gets stuck in an
infinite loop of unhandled interrupts.
Something similar was needed for ARM in commit b29fd33db5
("target/arm: use DISAS_EXIT for eret handling").
Fixes: e350d8ca3a ("target/mips: optimize indirect branches")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Richard Henderson <rth@twiddle.net>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
MIPS KVM trap & emulate guest kernels have a different segment layout
compared with traditional MIPS kernels, to allow both the user and
kernel code to run from the user address segment without repeatedly
trapping to KVM.
QEMU currently supports this layout only for KVM, but it's sometimes
useful to be able to run these kernels in QEMU on a PC, so enable it for
TCG too.
This also paves the way for MIPS KVM VZ support (which uses the normal
virtual memory layout) by abstracting whether user mode kernel segments
are in use.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
[Yongbok Kim:
minor change]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Improve the segment definitions used by get_physical_address() to yield
target_ulong types, e.g. 0xffffffff80000000 instead of 0x80000000. This
is in preparation for enabling emulation of MIPS KVM T&E segments in TCG
MIPS targets, which unlike KVM could potentially have 64-bit
target_ulong. In such a case the offset guest KSEG0 address ends up at
e.g. 0x000000008xxxxxxx instead of 0xffffffff8xxxxxxx.
This also allows the casts to int32_t that force sign extension to be
removed, which removes any confusion due to relational comparison of
unsigned (target_ulong) and signed (int32_t) types.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: kvm@vger.kernel.org
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Since commit 9768e2abf7 the initrd is loaded at the end of the low
memory to avoid a clash with the kernel relocation when KASLR is used.
However this in turn conflicts with the bootmap memory that the kernel
tries to place after initrd, but in low memory. The bootmap spans the
whole usable physical address space. The machine can have at most 2GiB
of memory, 256MiB of low memory mapped at 0x00000000, and 1792MiB of
high memory mapped at 0x90000000. The biggest bootmap therefore
corresponds to the addresses 0x00000000 -> 0xffffffff, which at 1 bit
per 4kiB page corresponds to 128kiB in memory.
Therefore reserve 128kiB after the initrd.
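A quick standalone check of the 128kiB figure (4GiB of address space at
one bit per 4kiB page):
    #include <stdio.h>

    int main(void)
    {
        unsigned long long span  = 0x100000000ULL; /* 0x00000000..0xffffffff */
        unsigned long long pages = span / 4096;    /* 1Mi pages */
        unsigned long long bytes = pages / 8;      /* one bit per page */
        printf("%llu KiB\n", bytes / 1024);        /* prints "128 KiB" */
        return 0;
    }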
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Writing to the MIPS DESAVE register (and now the KScratch registers)
will stop translation, supposedly due to risk of execution mode
switches. However these registers are basically RW scratch registers
with no side effects so there is no risk of them triggering execution
mode changes.
Drop the bstate = BS_STOP for these registers for both mtc0 and dmtc0.
Fixes: 7a387fffce ("Add MIPS32R2 instructions, and generally straighten out the instruction decoding. This is also the first percent towards MIPS64 support.")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
testchardev2 is not a valid chardev id here. Use testchardev1
instead which has been created with chardev-add right before
the 'chardev-send-break' line.
And while we're at it, add the test-hmp.c file to the MAINTAINERS
file, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1501149097-19071-1-git-send-email-thuth@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Migration pull 2017-08-02
Just minor fixes for 2.10
# gpg: Signature made Wed 02 Aug 2017 14:55:21 BST
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20170802a:
io: fix qio_channel_socket_accept err handling
migration: fix comment disorder in RAMState
migration: fix small leaks
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When accept fails, we should set errp with the reason. More
importantly, the caller may assume errp to be non-NULL when an error
happens, and not setting errp may crash QEMU.
At the same time, move the trace_qio_channel_socket_accept_fail() call
after the if check on EINTR, for two reasons:
1. when EINTR happens, it's not really a fault (we should just try
again), so we should not log an "accept failure";
2. trace_*() functions may overwrite errno, and then the old errno would
be lost. We need to either check errno before the trace_*() calls, or
preserve the errno, as in the sketch below.
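A minimal sketch of that pattern with plain POSIX accept() (illustrative
only; the real code uses qio_channel_socket_accept(), error_setg_errno()
and a trace call):
    #include <sys/socket.h>
    #include <errno.h>
    #include <stdio.h>

    static int accept_one(int listen_fd)
    {
        int fd;

    retry:
        fd = accept(listen_fd, NULL, NULL);
        if (fd < 0) {
            int saved_errno = errno;

            if (saved_errno == EINTR) {
                goto retry;            /* not a real failure, just retry */
            }
            /* Report the reason to the caller *before* anything that
             * might clobber errno (such as a trace function). */
            fprintf(stderr, "accept failed: errno=%d\n", saved_errno);
            errno = saved_errno;
            return -1;
        }
        return fd;
    }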
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1501666880-10159-3-git-send-email-peterx@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Spotted thanks to valgrind and tests/device-introspect-test:
==11711== 1 bytes in 1 blocks are definitely lost in loss record 6 of 14,537
==11711== at 0x4C2EB6B: malloc (vg_replace_malloc.c:299)
==11711== by 0x1E0CDBD8: g_malloc (gmem.c:94)
==11711== by 0x1E0E696E: g_strdup (gstrfuncs.c:363)
==11711== by 0x695693: migration_instance_init (migration.c:2226)
==11711== by 0x717C4B: object_init_with_type (object.c:344)
==11711== by 0x717E80: object_initialize_with_type (object.c:375)
==11711== by 0x7182EB: object_new_with_type (object.c:483)
==11711== by 0x718328: object_new (object.c:493)
==11711== by 0x4B8A29: qmp_device_list_properties (qmp.c:542)
==11711== by 0x4A9561: qmp_marshal_device_list_properties (qmp-marshal.c:1425)
==11711== by 0x819D4A: do_qmp_dispatch (qmp-dispatch.c:104)
==11711== by 0x819E82: qmp_dispatch (qmp-dispatch.c:131)
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170801160419.14180-1-marcandre.lureau@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
pc, acpi, virtio: fixes, test speedup for rc1
Some fixes all over the place. Notably vhost-user gained a new message
to set endian-ness. Borderline for 2.10 but seems to be the only way to
fix legacy guests. Also pc tests are run on kvm now. Not a fix at all
but doesn't touch qemu itself, so I merged it since I had to run these a
lot and I just got tired of waiting for these to finish.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Tue 01 Aug 2017 22:36:47 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
pc: acpi: force FADT rev1 for 440fx based machine types
pc: make 'pc.rom' readonly when machine has PCI enabled
vhost-user: fix watcher need be removed when vhost-user hotplug
tests/bios-tables-test: Compiler warning fix
accel: cleanup error output
intel_iommu: use access_flags for iotlb
intel_iommu: fix iova for pt
vhost-user: fix legacy cross-endian configurations
vhost: fix a memory leak
tests: switch pxe and vm gen id tests to use kvm
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
w2k used to boot on QEMU until the revision of the FADT was
bumped to rev3
(commit 77af8a2b hw/i386: Use Rev3 FADT (ACPI 2.0) instead of Rev1 to improve guest OS support.)
Keep the PC machine at rev1 to remain compatible, and Q35
at rev3, where w2k isn't supported anyway, so that OSX can
run as well.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Tested-by: John Arbuckle <programmingkidx@gmail.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Looking at BIOS ROM mapping in QEMU, it seems that only isapc
(i.e. a machine without PCI) requires the ROM to be mapped as
RW; in other cases the BIOS is mapped as RO. Do the same for the option
ROM 'pc.rom' when the machine has PCI enabled.
As a useful side effect, the pc.rom MemoryRegion stops being
put in the vhost memory map (it is filtered out by vhost_section()),
which reduces the number of entries by 1.
Coincidentally it fixes a migration failure reported in
"[PATCH V2] vhost: fix a migration failed because of vhost region merge",
where the following destination CLI, with /sys/module/vhost/parameters/max_mem_regions = 8,
export DIMMSCOUNT=6
QEMU -enable-kvm \
-netdev type=tap,id=guest0,vhost=on,script=no,vhostforce \
-device virtio-net-pci,netdev=guest0 \
-m 256,slots=256,maxmem=2G \
`i=0; while [ $i -lt $DIMMSCOUNT ]; do echo \
"-object memory-backend-ram,id=m$i,size=128M \
-device pc-dimm,id=d$i,memdev=m$i"; i=$(($i + 1)); \
done`
will fail to start up with the error:
"-device pc-dimm,id=d5,memdev=m5: a used vhost backend has no free memory slots left"
while it's possible to add the 6th DIMM during hotplug
on the source.
The issue is caused by the fact that the number of entries in the vhost map
is bigger by 1 entry when -device is processed than
after the guest boots up, and that the offending entry belongs to
'pc.rom'. vhost has no reason to do I/O in the ROM range,
so making it RO hides the region from vhost and makes the number
of entries in the vhost memory map at -device/machine_done time
match the number of entries after the guest boots.
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reported-by: Peng Hao <peng.hao2@zte.com.cn>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
"nc" is freed after hotplug vhost-user, but the watcher is not removed.
The QEMU crash when the watcher access the "nc" when socket disconnects.
Program received signal SIGSEGV, Segmentation fault.
#0 object_get_class (obj=obj@entry=0x2) at qom/object.c:750
#1 0x00007f9bb4180da1 in qemu_chr_fe_disconnect (be=<optimized out>) at chardev/char-fe.c:372
#2 0x00007f9bb40d1100 in net_vhost_user_watch (chan=<optimized out>, cond=<optimized out>, opaque=<optimized out>) at net/vhost-user.c:188
#3 0x00007f9baf97f99a in g_main_context_dispatch () from /usr/lib64/libglib-2.0.so.0
#4 0x00007f9bb41d7ebc in glib_pollfds_poll () at util/main-loop.c:213
#5 os_host_main_loop_wait (timeout=<optimized out>) at util/main-loop.c:261
#6 main_loop_wait (nonblocking=nonblocking@entry=0) at util/main-loop.c:515
#7 0x00007f9bb3e266a7 in main_loop () at vl.c:1917
#8 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4786
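The shape of the fix, sketched with plain GLib (the struct and field
names here are illustrative, not the actual net/vhost-user.c ones):
    #include <glib.h>

    typedef struct {
        guint watch;            /* tag returned by g_io_add_watch() */
        /* ... other per-netdev state ... */
    } NetUserState;

    static void net_user_cleanup(NetUserState *s)
    {
        if (s->watch) {
            /* detach the watcher before the state it uses is freed */
            g_source_remove(s->watch);
            s->watch = 0;
        }
        /* ... now it is safe to free the rest of *s ... */
    }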
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
gcc 7.1.1 in fedora 26 moans about the:
tables = g_new0(uint32_t, tables_nr)
because it can't convince itself that tables_nr is positive.
This is fallout from g_assert_cmpint no longer necessarily being
no-return; replace it with a plain g_assert.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Only emit "XXX accelerator not found" if there are no
further accelerators listed. E.g.
accel=kvm:tcg
doesn't print a "KVM accelerator not found" warning
when it falls back to tcg, but a plain
accel=kvm
prints a warning, since no fallback is given.
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
The IOTLB permissions were cached separately for read and write. Let's merge them into a single access_flags field.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
IOMMUTLBEntry.iova is returned incorrectly on one PT path (though we
can mostly not trigger this path, and even if we do, we are mostly
discarding this value, so it didn't break anything). Fix it by
converting the VTD_PAGE_MASK into the correct definition
VTD_PAGE_MASK_4K, then remove VTD_PAGE_MASK.
Fixes: b93130 ("intel_iommu: cleanup vtd_{do_}iommu_translate()")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Currently, vhost-user does not implement any means for notifying the
backend about guest endianness. This commit introduces a new message
called VHOST_USER_SET_VRING_ENDIAN which is analogous to the ioctl()
called VHOST_SET_VRING_ENDIAN used for kernel vhost backends. Such a
message is necessary for backends supporting legacy (pre-1.0) virtio
devices running in big-endian guests.
Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
Signed-off-by: Mike Cui <cui@nutanix.com>
When skipping implicit nodes in bdrv_block_device_info(), we know that
bs0 is always non-NULL; initially, because it's taken from a BdrvChild
and a BdrvChild never has a NULL bs, and after the first iteration
because implicit nodes always have a backing file.
Remove the NULL check and add an assertion that the implicit node does
indeed have a backing file.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
qemu-iotests 059 left a whole lot of image files behind in the scratch
directory because VMDK creates additional files for extents, and cleaning
them up requires the original image to be intact (the cleanup code parses
qemu-img info output to find all extent files), but the test overwrote the
image many times, as it does for all other image formats.
In addition, _use_sample_img overwrites the TEST_IMG variable, causing
new images created afterwards to reuse the name of the sample file
rather than the usual t.IMGFMT.
This patch adds an intermediate _cleanup_test_img after each subtest
that created an image file with additional extent files, and also after
each use of a sample image. _cleanup_test_img is also changed so that it
resets TEST_IMG after a sample image is cleaned up.
Note that this test was failing before this commit and continues to do
so after it. This failure was introduced in commit 9877860 ('block/vmdk:
Report failures in vmdk_read_cid()') and needs to be dealt with
separately.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemu-iotests 063 left t.raw.raw1 behind in the scratch directory because
it used the wrong suffix. Make sure to clean it up after completing the
test.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemu-iotests 162 left qemu-nbd.pid behind in the scratch directory, and
potentially a file called '42' in the current directory. Make sure to
clean it up after completing the tests.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemu-iotests 153 left t.qcow2.c behind in the scratch directory. Make
sure to clean it up after completing the tests.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemu-iotests 141 attempted to use brace expansion to remove all images
with a single command. However, for this to work, the braces shouldn't
be quoted.
With this fix, the test correctly cleans up its scratch images.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemu-iotests 074 and 179 left a blkdebug.conf behind in the scratch
directory. Make sure to clean up after completing the tests.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qemu-iotests 041 left quorum_snapshot.img and target.img behind in the
scratch directory. Make sure to clean up after completing the tests.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
bdrv_open_driver() is called in two places, bdrv_new_open_driver() and
bdrv_open_common(). In the latter, failure cleanup is in its caller,
bdrv_open_inherit(), which unrefs the bs->file of the failed driver open
if it exists.
Let's move the bs->file cleanup to bdrv_open_driver() to take care of
all callers, and do not set bs->drv to NULL unless the driver's open
function failed. When bs is destroyed by removing its last reference, it
calls bdrv_close(), which checks bs->drv to perform the needed cleanups
and also calls the driver's close function. Since it cleans up options
and opaque, we must take care not to leave dangling pointers.
The error paths in bdrv_open_driver() are now two:
If open fails, drv->bdrv_close() should not be called. Unref the child
if it exists, free what we allocated and set bs->drv to NULL. Return the
error and let callers free their stuff.
If open succeeds but we fail after, return the error and let callers
unref and delete their bs, while cleaning up their allocations.
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In some error paths it is possible to QDECREF a freed dangling
explicit_options, resulting in a heap overflow crash. For example,
bdrv_open_inherit()'s fail path unrefs it, then calls bdrv_unref(), which calls
bdrv_close(), which also unrefs it.
Signed-off-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The new test 190 ensures we don't regress back to an infinite loop when
measuring the size of a 2T+ qcow2 image. I did not append to test 178,
because that test is also designed to run with format 'raw'; also, this
gives us some coverage of the measure command under the quick group.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We had a bug for multiple releases where dirty-bitmap count was
documented in bytes but reported in sectors; enhance the testsuite
to add coverage of DirtyBitmapInfo to ensure we do not regress again.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Without redirecting qemu's stderr to stdout, _filter_qemu will not apply
to warnings. This results in $QEMU_PROG not being replaced by QEMU_PROG
which is not great if your qemu executable is not called
qemu-system-x86_64 (e.g. qemu-system-i386).
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
On one hand, the _make_test_img invocation for creating the target image
was missing a -u because its backing file is not supposed to exist at
that point.
On the other hand, nobody noticed probably because the backing file is
created later on and _cleanup failed to remove it: The quotation marks
were misplaced so bash tried to delete a file literally called
"$TEST_IMG{,.target}..." instead of performing brace expansion. Thus, the
files stayed around after the first run and qemu-img create did not
complain about a missing backing file on any run but the first.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
In some cases, the guest can observe the wrong ordering of UIP and
interrupts. This can happen if the VCPU exit is timed like this:
    iothread                     VCPU
    ... wait for interrupt ...
    t-100ns   read register A
    t         wake up, take BQL
    t+100ns   update_in_progress
                return false
              return UIP=0
    trigger interrupt
The interrupt is late; the VCPU expected the falling edge of UIP to
happen after the interrupt. update_in_progress is already trying to
cover this case by latching UIP if the timer is going to fire soon,
and the fix is documented in the commit message for commit 56038ef623
("RTC: Update the RTC clock only when reading it", 2012-09-10). It
cannot be tested with qtest, because its timing of interrupts vs. reads
is exact.
However, the implementation was incorrect because cmos_ioport_read
cleared the UIP bit of register A instead of leaving that to rtc_update_timer. Fixing
the implementation of cmos_ioport_read to match the commit message,
however, breaks the "uip-stuck" test case from the previous patch.
To fix it, skip update timer optimizations if UIP has been latched.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
qemu_savevm_state_cleanup() takes about 300ms in my RAM migration tests
with an 8U24G VM (20G actually occupied); the main cost comes from
KVM_SET_USER_MEMORY_REGION ioctl when mem.memory_size = 0 in
kvm_set_user_memory_region. In kmod, the main cost is
kvm_zap_obsolete_pages, which traverses the active_mmu_pages list to
zap the unsync sptes.
It can be optimized by delaying memory_global_dirty_log_stop to the next
vm_start.
Changes v2->v3:
- NULL VMChangeStateHandler if it is deleted and protect the scenario
of nested invocations of memory_global_dirty_log_start/stop [Paolo]
Changes v1->v2:
- create a VMChangeStateHandler in memory.c to reduce the coupling [Paolo]
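A generic sketch of the deferral idea (names are illustrative; the
actual patch registers a VMChangeStateHandler in memory.c and also
protects against nested start/stop invocations):
    #include <stdbool.h>

    static bool dirty_log_stop_pending;

    /* Called where dirty logging used to be stopped synchronously: the
     * expensive KVM_SET_USER_MEMORY_REGION churn is merely postponed. */
    static void dirty_log_stop_deferred(void)
    {
        dirty_log_stop_pending = true;
    }

    /* Hooked into the VM state change notification: do the real work
     * only when the guest is started again. */
    static void on_vm_start(void)
    {
        if (dirty_log_stop_pending) {
            dirty_log_stop_pending = false;
            /* ... perform the real memory_global_dirty_log_stop() ... */
        }
    }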
Signed-off-by: Jay Zhou <jianjay.zhou@huawei.com>
Message-Id: <1501237733-2736-1-git-send-email-jianjay.zhou@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Clang static analyzer reports a memory leak. Actually, the allocated
memory escapes here:
record->attribute_list[record->attributes].pair = data;
but clang is correct that the memory might leak if len is zero. We
know it isn't; assert that it is the case.
The craziness doesn't end there. The memory is freed by
bt_l2cap_sdp_close_ch:
g_free(sdp->service_list[i].attribute_list->pair);
which actually should have been written like this:
g_free(sdp->service_list[i].attribute_list[0].pair);
The attribute_list is sorted with qsort; but indeed the first
entry of attribute_list should point to "data" even after the qsort,
because the first record has id SDP_ATTR_RECORD_HANDLE, whose
numeric value is zero.
But hang on. The qsort function is
    static int sdp_attributeid_compare(
        const struct sdp_service_attribute_s *a,
        const struct sdp_service_attribute_s *b)
    {
        return (int) b->attribute_id - a->attribute_id;
    }
but no one ever writes attribute_id. So it only works if qsort is
stable, and who knows what else is broken, but we can fix it by
setting attribute_id in the while loop.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 04bf2526ce (exec: use
qemu_ram_ptr_length to access guest ram) started using qemu_ram_ptr_length
instead of qemu_map_ram_ptr, but when used with Xen, the behavior of
the two functions is different. They both call xen_map_cache, but one with
"lock", meaning the mapping of guest memory is never released
implicitly, and the second one without, which means the mapping can be
released later, when needed.
In the context of address_space_{read,write}_continue, the pointer to those
mappings should not be locked, because it is used immediately and never
used again.
The lock parameter makes it explicit in which context qemu_ram_ptr_length
is called.
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Message-Id: <20170726165326.10327-1-anthony.perard@citrix.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
QEMU calls kvm_get_vcpu_events, and the kernel always returns sipi_vector as
0; it is never valid when reported to user space. But when QEMU then calls
kvm_put_vcpu_events, it sets sipi_vector in the kernel to 0. This will
accidentally modify sipi_vector when the in-kernel sipi_vector is not 0.
Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Reviewed-by: Liu Yi <liu.yi24@zte.com.cn>
Message-Id: <1500047256-8911-1-git-send-email-peng.hao2@zte.com.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The deprecation of features in QEMU is totally ad hoc currently,
with no way for the user to get a list of what is deprecated
in each release. This adds an appendix to the doc that records
when each deprecation was made and provides text explaining
what to use instead, if anything.
Since there has been no formal policy around removal of deprecated
features in the past, any deprecations prior to 2.10.0 are to be
treated as if they had been made at the 2.10.0 release. Thus the
earliest that existing deprecations will be deleted is the start
of the 2.12.0 cycle.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20170725113638.7019-1-berrange@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Only emit "XXX accelerator not found" if there are no
further accelerators listed. E.g.
accel=kvm:tcg
doesn't print a "KVM accelerator not found" warning
when it falls back to tcg, but a plain
accel=kvm
prints a warning, since no fallback is given.
Suggested-by: Daniel P. Berrange <berrange@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Message-Id: <20170717144527.24534-1-lvivier@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There's a rare segfault on exit if the guest is accessing
I/O during exit.
It's always hitting the atomic_inc(&bs->in_flight) with a NULL
bs. This was added recently in 99723548, but I don't see it
as the cause.
Flip vl.c around so we pause the CPUs before closing the block devices;
that way we shouldn't have anything trying to access them when
they're gone.
This was originally Red Hat bz https://bugzilla.redhat.com/show_bug.cgi?id=1451015
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reported-by: Cong Li <coli@redhat.com>
--
This is a very rare race, I'll leave it running in a loop to see if
we hit anything else and to check this really fixes it.
I do worry if there are other cases that can trigger this - e.g.
hot-unplug or ejecting a CD.
Message-Id: <20170713190116.21608-1-dgilbert@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
We are malloc'ing a QString and spending CPU cycles on converting a
QObject to string, just for the sake of sticking the string in the trace
message. Wasted when we aren't tracing. Avoid that.
[Commit message and description suggested by Markus Armbruster to
provide more detail about the rationale for this patch.
Use trace_event_get_state_backends() instead of trace_event_get_state()
to honor DTrace/UST backend dstates.
--Stefan]
Signed-off-by: Denis V. Lunev <den@openvz.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20170725143923.11241-1-den@openvz.org
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Lluís Vilanova <vilanova@ac.upc.edu>
CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The only exceptions are groups of numbers separated by the symbols
'.', ' ', ':', '/', like 'ab.09.7d'.
This patch is made by the following:
> find . -name trace-events | xargs python script.py
where script.py is the following python script:
=========================
#!/usr/bin/env python
import sys
import re
import fileinput
rhex = '%[-+ *.0-9]*(?:[hljztL]|ll|hh)?(?:x|X|"\s*PRI[xX][^"]*"?)'
rgroup = re.compile('((?:' + rhex + '[.:/ ])+' + rhex + ')')
rbad = re.compile('(?<!0x)' + rhex)
files = sys.argv[1:]
for fname in files:
    for line in fileinput.input(fname, inplace=True):
        arr = re.split(rgroup, line)
        for i in range(0, len(arr), 2):
            arr[i] = re.sub(rbad, '0x\g<0>', arr[i])
        sys.stdout.write(''.join(arr))
=========================
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Message-id: 20170731160135.12101-5-vsementsov@virtuozzo.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
In trace format '#' flag of printf is forbidden. Fix it to '0x%'.
This patch is created by the following:
check that we have a problem
> find . -name trace-events | xargs grep '%#' | wc -l
56
check that there are no cases with additional printf flags before '#'
> find . -name trace-events | xargs grep "%[-+ 0'I]+#" | wc -l
0
check that there are no wrong usage of '#' and '0x' together
> find . -name trace-events | xargs grep '0x%#' | wc -l
0
fix the problem
> find . -name trace-events | xargs sed -i 's/%#/0x%/g'
[Eric Blake noted that xargs grep '%[-+ 0'I]+#' should be xargs grep
"%[-+ 0'I]+#" instead so the shell quoting is correct.
--Stefan]
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20170731160135.12101-3-vsementsov@virtuozzo.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Code that checks dstate is unaware of SystemTap and LTTng UST dstate, so
the following trace event will not fire when solely enabled by SystemTap
or LTTng UST:
    if (trace_event_get_state(TRACE_MY_EVENT)) {
        str = g_strdup_printf("Expensive string to generate ...",
                              ...);
        trace_my_event(str);
        g_free(str);
    }
Add trace_event_get_state_backends() to fetch backend dstate. Those
backends that use QEMU dstate fetch it as part of
generate_h_backend_dstate().
Update existing trace_event_get_state() callers to use
trace_event_get_state_backends() instead.
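For instance, the example above becomes (sketch):
    if (trace_event_get_state_backends(TRACE_MY_EVENT)) {
        str = g_strdup_printf("Expensive string to generate ...",
                              ...);
        trace_my_event(str);
        g_free(str);
    }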
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170731140718.22010-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
QEMU keeps track of trace event enabled/disabled state and provides
monitor commands to inspect and modify the "dstate". SystemTap and
LTTng UST maintain independent enabled/disabled states for each trace
event, the other backends rely on QEMU dstate.
Introduce a new per-event macro that combines backend-specific dstate
like this:
    #define TRACE_MY_EVENT_BACKEND_DSTATE() ( \
        QEMU_MY_EVENT_ENABLED() || /* SystemTap */ \
        tracepoint_enabled(qemu, my_event) /* LTTng UST */ || \
        false)
This will be used to extend trace_event_get_state() in the next patch.
[Daniel Berrange pointed out that QEMU_MY_EVENT_ENABLED() must be true
by default, not false. This way events will fire even if the DTrace
implementation does not implement the SystemTap semaphores feature.
Ubuntu Precise uses lttng-ust-dev 2.0.2 which does not have
tracepoint_enabled(), so we need a compatibility wrapper to keep Travis
builds passing.
--Stefan]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 20170731140718.22010-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
fixup! trace: add TRACE_<event>_BACKEND_DSTATE()
The simpletrace compatibility code for systemtap creates a
function and some global variables for mapping to event ID
numbers. We generate multiple -simpletrace.stp files though,
one per target and systemtap considers functions & variables
to be globally scoped, not per file. So if trying to use the
simpletrace compat probes, systemtap will complain:
# stap -e 'probe qemu.system.arm.simpletrace.visit_type_str { print( "hello")}'
semantic error: conflicting global variables: identifier 'event_name_to_id_map' at /usr/share/systemtap/tapset/qemu-aarch64-simpletrace.stp:3:8
source: global event_name_to_id_map
^
identifier 'event_name_to_id_map' at /usr/share/systemtap/tapset/qemu-system-arm-simpletrace.stp:3:8
source: global event_name_to_id_map
^
WARNING: cross-file global variable reference to identifier 'event_name_to_id_map' at /usr/share/systemtap/tapset/qemu-system-arm-simpletrace.stp:3:8 from: identifier 'event_name_to_id_map' at /usr/share/systemtap/tapset/qemu-aarch64-simpletrace.stp:8:21
source: if (!([name] in event_name_to_id_map)) {
^
WARNING: cross-file global variable reference to identifier 'event_next_id' at /usr/share/systemtap/tapset/qemu-system-arm-simpletrace.stp:4:8 from: identifier 'event_next_id' at :9:38
source: event_name_to_id_map[name] = event_next_id
^
We already have a string used to prefix probe names, so just
replace '.' with '_' to get a function / variable name prefix
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170728133657.5525-1-berrange@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
target-arm queue:
* fix broken properties on MPS2 SCC device
* fix MPU trace handling of write vs exec
* fix MPU M profile bugs:
- not handling system space or PPB region correctly
- not resetting state
- not migrating MPU_RNR
# gpg: Signature made Mon 31 Jul 2017 13:21:40 BST
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20170731:
hw/mps2_scc: fix incorrect properties
target/arm: Migrate MPU_RNR register state for M profile cores
target/arm: Move PMSAv7 reset into arm_cpu_reset() so M profile MPUs get reset
target/arm: Rename cp15.c6_rgnr to pmsav7.rnr
target/arm: Don't allow guest to make System space executable for M profile
target/arm: Don't do MPU lookups for addresses in M profile PPB region
target/arm: Correct MPU trace handling of write vs execute
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts commit bc658e4a2e.
Some versions of gcc warn about this:
linux-user/syscall.c: In function ‘do_ioctl_rt’:
linux-user/syscall.c:5577:37: error: ‘host_rt_dev_ptr’ may be used uninitialized in this function [-Werror=uninitialized]
and in particular the Travis builds fail; they use
gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3.
Revert the change to fix the travis builds.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The PMSAv7 region number register is migrated for R profile
cores using the cpreg scheme, but M profile doesn't use
cpregs, and so we weren't migrating the MPU_RNR register state
at all. Fix that by adding a migration subsection for the
M profile case.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1501153150-19984-6-git-send-email-peter.maydell@linaro.org
When the PMSAv7 implementation was originally added it was for R profile
CPUs only, and reset was handled using the cpreg .resetfn hooks.
Unfortunately for M profile cores this doesn't work, because they do
not register any cpregs. Move the reset handling into arm_cpu_reset(),
where it will work for both R profile and M profile cores.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1501153150-19984-5-git-send-email-peter.maydell@linaro.org
Almost all of the PMSAv7 state is in the pmsav7 substruct of
the ARM CPU state structure. The exception is the region
number register, which is in cp15.c6_rgnr. This exception
is a bit odd for M profile, which otherwise generally does
not store state in the cp15 substruct.
Rename cp15.c6_rgnr to pmsav7.rnr accordingly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1501153150-19984-4-git-send-email-peter.maydell@linaro.org
The M profile PMSAv7 specification says that if the address being looked
up is in the PPB region (0xe0000000 - 0xe00fffff) then we do not use
the MPU regions but always use the default memory map. Implement this
(we were previously behaving like an R profile PMSAv7, which does not
special case this).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1501153150-19984-2-git-send-email-peter.maydell@linaro.org
Correct an off-by-one bug in the PMSAv7 MPU tracing where it would print
a write access as "reading", an insn fetch as "writing", and a read
access as "execute".
Since we have an MMUAccessType enum now, we can make the code clearer
in the process by using that rather than the raw 0/1/2 values.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1500906792-18010-1-git-send-email-peter.maydell@linaro.org
When this file was rewritten/renamed in fdee2025dd,
a reference path was not updated.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
With the move of some docs/ to docs/devel/ on ac06724a71,
a reference path was not updated.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
With the move of some docs/ to docs/devel/ on ac06724a71,
no references were updated.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
With the move of some docs/ to docs/devel/ on ac06724a71,
a couple of references were not updated.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
With the move of some docs to docs/interop on ac06724a71,
a couple of references were not updated.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
With the move of some docs to docs/interop on d59157ea05,
a reference path was not updated.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
With the move of some docs to docs/interop on d59157e, a couple of
references were not updated.
Signed-off-by: Cleber Rosa <crosa@redhat.com>
[PMD: fixed a typo and another reference of docs/interop/qmp-spec.txt]
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
thunk.c:91:32: warning: Call to 'malloc' has an allocation size of 0 bytes
se->field_offsets[i] = malloc(nb_fields * sizeof(int));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
linux-user/syscall.c:1627:35: warning: 1st function call argument is an uninitialized value
target_saddr->sa_family = tswap16(addr->sa_family);
^~~~~~~~~~~~~~~~~~~~~~~~
linux-user/syscall.c:1629:25: warning: The left operand of '==' is a garbage value
if (addr->sa_family == AF_NETLINK && len >= sizeof(struct sockaddr_nl)) {
~~~~~~~~~~~~~~~ ^
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
not hit since 2009! :)
linux-user/elfload.c:1102:20: warning: Out of bound memory access (access exceeds upper limit of memory block)
(*regs[i]) = tswap32(env->gregs[i]);
~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
So we have sizeof(struct in6_address) != sizeof(uintptr_t)
and Clang > Coverity on this, see 4555ca6816 :)
net/eth.c:426:30: warning: The code calls sizeof() on a pointer type. This can produce an unexpected result
return bytes_read == sizeof(dst_addr);
^ ~~~~~~~~~~
net/eth.c:475:34: warning: The code calls sizeof() on a pointer type. This can produce an unexpected result
return bytes_read == sizeof(src_addr);
^ ~~~~~~~~~~
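For context, the pitfall behind the warning in a standalone form (not QEMU
code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char buf[16];
        char *p = buf;

        /* sizeof(buf) is the array size (16); sizeof(p) is the size of the
         * pointer itself (typically 8 on a 64-bit host), which is rarely
         * what was meant. */
        printf("sizeof(buf)=%zu sizeof(p)=%zu\n", sizeof(buf), sizeof(p));

        memset(buf, 0, sizeof(buf));   /* clears all 16 bytes */
        memset(p, 0, sizeof(p));       /* clears only 8 bytes -- suspicious */
        return 0;
    }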
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Extract the (correct) cleanup code into a new function, vnc_free_addresses(), then
use it to fix the memory leaks.
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Screwed up in commit 3a55fc0f, v2.6.0.
If qemu_chr_fe_read_all() returns -EINTR, the do {} while loop continues and the
n accumulator used to complete reads up to sizeof(msg) is decremented by 4 (the
value of EINTR on Linux).
To avoid that, use simpler if() statements and continue if EINTR occurred.
hw/misc/ivshmem.c:650:14: warning: Loss of sign in implicit conversion
} while (n < sizeof(msg));
^
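The corrected shape of such a loop, as a hedged standalone sketch using POSIX
read() (qemu_chr_fe_read_all() reports EINTR as a negative return value rather
than via errno, which is exactly how the accumulator ended up being
decremented):

    #include <errno.h>
    #include <stddef.h>
    #include <unistd.h>

    /* Read exactly 'len' bytes from 'fd', retrying on EINTR.
     * Returns 0 on success, -1 on error or EOF. */
    static int read_full(int fd, void *buf, size_t len)
    {
        size_t n = 0;

        while (n < len) {
            ssize_t ret = read(fd, (char *)buf + n, len - n);

            if (ret < 0) {
                if (errno == EINTR) {
                    continue;   /* retry without touching the accumulator */
                }
                return -1;
            }
            if (ret == 0) {
                return -1;      /* unexpected EOF */
            }
            n += ret;           /* only ever add bytes actually read */
        }
        return 0;
    }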
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
A negative value means the call errored, so check for that before comparing
against an unsigned size.
hw/core/loader.c:149:9: warning: Loss of sign in implicit conversion
if (size > max_sz) {
^~~~
hw/core/loader.c:171:9: warning: Loss of sign in implicit conversion
if (size > memory_region_size(mr)) {
^~~~
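A standalone illustration of why the error case needs to be checked before the
unsigned comparison (assumption: the helper reports failure with a negative
return value, as stated above):

    #include <stdio.h>
    #include <sys/types.h>

    int main(void)
    {
        ssize_t size = -1;        /* stand-in for a failed load helper */
        size_t max_sz = 4096;

        /* -1 converted to size_t becomes a huge number, so this check
         * fires for the wrong reason. */
        if ((size_t)size > max_sz) {
            printf("looks 'too big', but it was really an error\n");
        }

        /* Checking the error case first keeps the intent clear. */
        if (size < 0) {
            printf("load failed\n");
        } else if ((size_t)size > max_sz) {
            printf("image too big\n");
        }
        return 0;
    }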
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alistair Francis <alistair.francis@xilinx.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This allows a one-liner from a fresh repository clone, i.e.:
./configure && make -j check-qtest-aarch64
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Starting QEMU with "qemu-system-tricore -nographic -M tricore_testboard -S"
and entering "x 0" at the monitor prompt leads to a segmentation fault.
This happens because tricore_cpu_get_phys_page_debug() is not implemented
yet; this is a temporary workaround to avoid the crash.
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
If slirp is disabled, it will fail with:
qemu-system-x86_64: -netdev user,id=qtest-bn0: Parameter 'type' expects a netdev backend type
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Currently get_maintainers.pl claims that the configure script is
maintained by Kamil:
$ scripts/get_maintainer.pl -f configure
Kamil Rytarowski <kamil@netbsd.org> (maintainer:NETBSD)
qemu-devel@nongnu.org (open list:All patches CC here)
This happens because the regex pattern for the NETBSD entry triggers
on everything that contains the keyword "NetBSD". Ease the situation
a little bit by restricting this to "Subject:" lines only, like we
already do in the "trivial patches" section.
Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Starting qemu-system-unicore32 without the -kernel parameter results in
an assert() failure that aborts QEMU. This patch replaces it with a
proper error message followed by exit(1).
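The general pattern, as a standalone sketch (the message text is illustrative;
QEMU itself would use error_report() rather than fprintf()):

    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        const char *kernel_filename = argc > 1 ? argv[1] : NULL;

        if (!kernel_filename) {
            /* report a usable message instead of tripping an assertion */
            fprintf(stderr, "error: a kernel image must be specified with -kernel\n");
            exit(1);
        }
        printf("loading %s\n", kernel_filename);
        return 0;
    }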
Signed-off-by: Eduardo Otubo <otubo@redhat.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
user_creatable_add_opts() returns a reference (the other reference is
for the root parent/child link).
Leak introduced in commit a1af255f06.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
This reverts commit b87680427e.
I thought this was a harmless preliminary for XIVE enablement patches
we expect later on. However, due to some subtle interactions between
qemu and SLOF (guest firmware) this breaks some things. Revert it for
now, we'll work out how to fix it when the rest of the XIVE patches
are ready.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If object_property_add_alias() returns an error in realize(), we should
propagate it to the caller and certainly not unref the DRC.
Same thing goes for unrealize(). Since object_property_del() is the last
call, we can even get rid of the intermediate Error *.
And finally, unrealize() should undo all registrations performed by
realize().
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
MIPS patches 2017-07-28
Changes:
* Improve the MIPS board kernel load error reporting
* Revert unnecessary warning messages
# gpg: Signature made Fri 28 Jul 2017 13:47:52 BST
# gpg: using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA 2B5C 2238 EB86 D5F7 97C2
* remotes/yongbok/tags/mips-20170728:
Revert "elf-loader: warn about invalid endianness"
hw/mips: load_elf_strerror to report kernel loading failure
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This reverts c8e1158cf6 "elf-loader: warn about invalid endianness"
as it produces a useless message every time an LE kernel image is
passed via -kernel on a ppc64-pseries machine. The pseries machine
already checks for ELF_LOAD_WRONG_ENDIAN and tries with big_endian=0.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Emulated MIPS boards bail out with a simple "could not load kernel" when
a kernel could not be loaded, without specifying the underlying reason.
Fix that by calling load_elf_strerror().
At the same time use error_report to report the error instead of
fprintf.
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The SPICE input code is currently detecting 0xe1 0x1d 0x45 as
the PAUSE key make sequence and 0xe1 0x9d 0xc5 as the break
sequence. This is incorrect, because all 6 scancodes together
are the make sequence, and there is no break sequence.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170727174640.30359-1-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
According to the PoP, bit positions 0-3 and 8-32 of the format-1 CCW must
contain zeros. Bits 0-3 are already covered by cmd_code validity
checking, and bit 32 is covered by the CCW address checking.
Bits 8-31 correspond to CCW1.flags and CCW1.count. Currently we only
check for the absence of certain flags. Let's fix this.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170725224442.13383-3-pasic@linux.vnet.ibm.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
[CH: tweaked comment]
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
According to the PoP, channel command words (CCW) must be doubleword
aligned and 31 bit addressable for format 1 and 24 bit addressable for
format 0 CCWs.
If the channel subsystem encounters a ccw address which does not satisfy
this alignment requirement, a program-check condition is recognized.
The situation with 31-bit addressability is a bit more complicated: both the
ORB and a format 1 CCW TIC hold the address of (the rest of) the channel
program, that is the address of the next CCW in a word, and the PoP
mandates that bit 0 of that word shall be zero -- or a program-check
condition is to be recognized -- and does not belong to the field holding
the ccw address.
Since in code the corresponding fields span across the whole word (unlike
in PoP where these are defined as 31 bit wide) we can check this by
applying a mask. The 24-bit addressable case doesn't affect TIC, because the
address is composed of a halfword and a byte portion (no additional zero-bit
requirements); it only slightly complicates the ORB case, where bits 1-7
also need to be zero.
The same requirements (especially n-bit addressability) apply to the
ccw addresses generated while chaining.
Let's make our CSS implementation follow the AR more closely.
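As a hedged sketch of the mask checks described above (constants are
illustrative, not copied from css.c): doubleword alignment means the low three
bits are zero, 31-bit addressability means bit 0 of the word is zero, and
24-bit addressability additionally requires the top byte to be zero.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* format-1 CCW / TIC / ORB word: doubleword aligned, bit 0 must be zero */
    static bool ccw1_addr_valid(uint32_t addr_word)
    {
        return (addr_word & 0x80000007) == 0;
    }

    /* format-0: doubleword aligned and 24-bit addressable */
    static bool ccw0_addr_valid(uint32_t addr)
    {
        return (addr & 0xff000007) == 0;
    }

    int main(void)
    {
        printf("%d %d\n", ccw1_addr_valid(0x00001000), ccw1_addr_valid(0x80000008));
        printf("%d %d\n", ccw0_addr_valid(0x00123450), ccw0_addr_valid(0x01000000));
        return 0;
    }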
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170727154842.23427-1-pasic@linux.vnet.ibm.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The processing of the scancodes for PAUSE/BREAK has been broken since
the conversion to qcodes in:
commit 8c10e0baf0
Author: Hervé Poussineau <hpoussin@reactos.org>
Date: Thu Sep 15 22:06:26 2016 +0200
ps2: use QEMU qcodes instead of scancodes
When using a VNC client, with the raw scancode extension, the client
will send a scancode of 0xc6 for both PAUSE and BREAK. There is mistakenly
no entry in the qcode_to_number table for this scancode, so
ps2_keyboard_event() just generates a log message and discards the
scancode.
When using a SPICE client, it will also send 0xc6 for BREAK, but
will send 0xe1 0x1d 0x45 0xe1 0x9d 0xc5 for PAUSE. There is no
entry in the qcode_to_number table for the scancode 0xe1 because
it is a special XT keyboard prefix not mapping to any QKeyCode.
Again ps2_keyboard_event() just generates a log message and discards
the scancode. The following 0x1d, 0x45, 0x9d, 0xc5 scancodes get
handled correctly. Rather than trying to handle 3 byte sequences
of scancodes in the PS/2 driver, special case the SPICE input
code so that it captures the 3 byte pause sequence and turns it
into a Pause QKeyCode.
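A standalone sketch of the capture idea (names and structure are illustrative,
not the actual SPICE input code; a real implementation would also have to
replay a partially matched prefix, which this sketch omits):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Swallow the 3-byte PAUSE make sequence (0xe1 0x1d 0x45) and emit a
     * single "Pause" key event instead of forwarding the raw bytes. */
    static bool pause_filter(uint8_t code)
    {
        static const uint8_t seq[] = { 0xe1, 0x1d, 0x45 };
        static unsigned pos;

        if (code == seq[pos]) {
            if (++pos == sizeof(seq)) {
                pos = 0;
                printf("emit Pause QKeyCode\n");
            }
            return true;        /* byte is part of the sequence */
        }
        pos = 0;
        return false;           /* unrelated byte, forward it as usual */
    }

    int main(void)
    {
        const uint8_t input[] = { 0x1c, 0xe1, 0x1d, 0x45, 0x2a };
        for (unsigned i = 0; i < sizeof(input); i++) {
            if (!pause_filter(input[i])) {
                printf("pass 0x%02x to the PS/2 layer\n", input[i]);
            }
        }
        return 0;
    }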
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170727113243.23991-1-berrange@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Page-up and Page-down were renamed. Add the names to the keysym list
so we can parse both old and new names. The keypad versions are already
present in the vnc map.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 20170726152918.11995-2-kraxel@redhat.com
x86 bug fix for -rc1
Fix for a bug in "-cpu max" that breaks libvirt usage of
query-cpu-model-expansion.
# gpg: Signature made Wed 26 Jul 2017 19:35:28 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-pull-request:
target/i386: Don't use x86_cpu_load_def() on "max" CPU model
target/i386: Define CPUID_MODEL_ID_SZ macro
target/i386: Use host_vendor_fms() in max_x86_cpu_initfn()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When commit 0bacd8b304 ('i386: Don't set CPUClass::cpu_def on
"max" model') removed the CPUClass::cpu_def field, we kept using
the x86_cpu_load_def() helper directly in max_x86_cpu_initfn(),
emulating the previous behavior when CPUClass::cpu_def was set.
However, x86_cpu_load_def() is intended to help initialization of
CPU models from the builtin_x86_defs table, and does lots of
other steps that are not necessary for "max".
One of the things x86_cpu_load_def() does is to set the properties
listed at tcg_default_props/kvm_default_props. We must not do
that on the "max" CPU model, otherwise under KVM we will
incorrectly report all KVM features as always available, and the
"svm" feature as always unavailable. The latter caused the bug
reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1467599
("Unable to start domain: the CPU is incompatible with host CPU:
Host CPU does not provide required features: svm")
Replace x86_cpu_load_def() with simple object_property_set*()
calls. In addition to fixing the above bug, this makes the KVM
branch in max_x86_cpu_initfn() very similar to the existing TCG
branch.
For reference, the full list of steps performed by
x86_cpu_load_def() is:
* Setting min-level and min-xlevel. Already done by
max_x86_cpu_initfn().
* Setting family/model/stepping/model-id. Done by the code added
to max_x86_cpu_initfn() in this patch.
* Copying def->features. Wrong because "-cpu max" features need to
be calculated at realize time. This was not a problem in the
current code because host_cpudef.features was all zeroes.
* x86_cpu_apply_props() calls. This causes the bug above, and
shouldn't be done.
* Setting CPUID_EXT_HYPERVISOR. Not needed because it is already
reported by x86_cpu_get_supported_feature_word(), and because
"-cpu max" features need to be calculated at realize time.
* Setting CPU vendor to host CPU vendor if on KVM mode.
Redundant, because max_x86_cpu_initfn() already sets it to the
host CPU vendor.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20170712162058.10538-5-ehabkost@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
hw/vfio/pci.c:308:29: warning: Use of memory after it is freed
qemu_set_fd_handler(*pfd, NULL, NULL, vdev);
^~~~
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Free the data _after_ using it.
hw/vfio/platform.c:126:29: warning: Use of memory after it is freed
qemu_set_fd_handler(*pfd, NULL, NULL, NULL);
^~~~
Reported-by: Clang Static Analyzer
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Fix leak of the 'encryptopts' string, which was mistakenly
declared const.
Fix leak of QemuOpts entry which should not have been deleted
from the opts array.
Reported-by: Coverity
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20170714103105.5781-1-berrange@redhat.com
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
The sm501 device uses vmstate_register_ram_global() to register its
memory region for migration. This means it gets a name that is
assumed to be global to the whole system, which in turn means that if
you create two of the device we assert because of the duplication:
qemu-system-ppc -device sm501 -device sm501
RAMBlock "sm501.local" already registered, abort!
Aborted (core dumped)
Changing this to just use memory_region_init_ram()'s automatic
registration of the memory region with a device-local name fixes
this. The downside is that it breaks migration compatibility, but
luckily we only added migration support to this device in the 2.10
release cycle so we haven't released a QEMU version with the broken
implementation.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1500309462-12792-1-git-send-email-peter.maydell@linaro.org
Various changes for the s390x code:
- updates for cpu model handling
- fix compilation with --disable-tcg
- fixes in vfio-ccw and I/O instruction handling
# gpg: Signature made Tue 25 Jul 2017 10:15:37 BST
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <conny@cornelia-huck.de>"
# gpg: aka "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# gpg: aka "Cornelia Huck <cohuck@kernel.org>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20170725:
s390x/css: fix ilen in IO instruction handlers
target/s390x: Add remaining switches to compile with --disable-tcg
target/s390x: Move exception-related functions to a new excp_helper.c file
target/s390x: Rework program_interrupt() and related functions
target/s390x: Move diag helpers to a separate file
target/s390x: Move s390_cpu_dump_state() to helper.c
target/s390x: improve baselining if certain base features are missing
s390x/kvm: better comment regarding zPCI feature availability
target/s390x: introduce (test|set)_be_bit
target/s390x: indicate query subfunction in s390_fill_feat_block
target/s390x: drop BE_BIT()
s390/cpumodel: remove KSS from the default model of z14
vfio/ccw: fix initialization of the Object DeviceState pointer in the common base-device
vfio/ccw: allocate irq info with the right size
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
ppc patch queue 2017-07-25
Last pull request for the 2.10 hard freeze, and correspondingly small.
There are a handful of bugfixes here plus an update for the "pseries"
guest firmware (SLOF).
This is later than ideal for a guest firmware update. However, this
does include a number of fixes in that guest firmware, so I think it's
worth the risk of squeezing this in just before the hard freeze.
# gpg: Signature made Tue 25 Jul 2017 06:43:14 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170725:
pseries: Update SLOF firmware image
spapr: Fix QEMU abort during memory unplug
spapr/htab: fix savevm
spapr_pci: Fix obsolete comment about MSIX encoding in addr/data
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When initiating a program check interruption by calling program_interrupt
the instruction length (ilen) of the current instruction is supplied as
the third parameter.
On s390x all the IO instructions are of instruction format S and their
ilen is 4. The calls to program_interrupt (introduced by commits
7b18aad543 ("s390: Add channel I/O instructions.", 2013-01-24) and
61bf0dcb2e ("s390x/ioinst: Add missing alignment checks for IO
instructions", 2013-06-21)) however use ilen == 2.
This is probably due to a confusion between ilen, which specifies the
instruction length in bytes, and ILC, which does the same but in halfwords.
If kvm_enabled(), this does not actually matter, because the ilen
parameter of program_interrupt is effectively unused.
Let's provide the correct ilen to program_interrupt.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Fixes: 7b18aad543 ("s390: Add channel I/O instructions.")
Fixes: 61bf0dcb2e ("s390x/ioinst: Add missing alignment checks for IO instructions")
Reviewed-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170724143452.55534-1-pasic@linux.vnet.ibm.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
These functions cannot be compiled with --disable-tcg. But since we
need the other functions from helper.c in the non-tcg build, we also
cannot simply remove helper.c from the non-tcg builds. Thus the
problematic functions have to be moved into a separate new file that we
can later omit in the non-tcg builds.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1500886370-14572-5-git-send-email-thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
misc_helper.c won't be compiled with --disable-tcg anymore, but we
still need the program_interrupt() function in that case. Move it
to interrupt.c instead, and refactor it to re-use the code from
trigger_pgm_exception() (for TCG) and enter_pgmcheck() (for KVM,
which now got renamed to kvm_s390_program_interrupt() for
clarity).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1500886370-14572-4-git-send-email-thuth@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
translate.c cannot be compiled with --disable-tcg, but we need
the s390_cpu_dump_state() in KVM-only builds, too. So let's move
that function to helper.c instead, which will also be compiled
when --disable-tcg has been specified.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1500886370-14572-2-git-send-email-thuth@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
There are certain features that we put into base models, but that are
not relevant for the actual search. The most famous example are
MSA subfunctions that might be disabled on certain real hardware out
there.
While the kvm host model detection will usually detect the correct model
on such machines (as it will in the common case not pass features to check
for into s390_find_cpu_def()), baselining will fall back to a quite old
model just because some MSA subfunctions are missing.
Let's improve that by ignoring lack of these features while performing
the search for a base model.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170720123721.12366-6-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Using ordinary bitmap operations to set/test bits does not work properly
on architectures other than s390x. Let's drop (test|set)_bit_inv and introduce
(test|set)_be_bit instead. These functions work on uint8_t arrays, not on
arrays of unsigned long, and are for now only used in the context of
CPU features.
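A standalone sketch of MSB-first bit numbering on a byte array, which is the
convention these helpers implement (the exact QEMU implementation may differ
in detail):

    #include <stdint.h>
    #include <stdio.h>

    /* bit 0 is the most significant bit of byte 0 */
    static void set_be_bit(unsigned int bit_nr, uint8_t *array)
    {
        array[bit_nr / 8] |= 0x80 >> (bit_nr % 8);
    }

    static int test_be_bit(unsigned int bit_nr, const uint8_t *array)
    {
        return !!(array[bit_nr / 8] & (0x80 >> (bit_nr % 8)));
    }

    int main(void)
    {
        uint8_t feat[2] = { 0 };

        set_be_bit(0, feat);    /* sets 0x80 in byte 0 */
        set_be_bit(9, feat);    /* sets 0x40 in byte 1 */
        printf("%02x %02x %d\n", feat[0], feat[1], test_be_bit(9, feat));
        return 0;
    }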
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170720123721.12366-4-david@redhat.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
The main changes are:
- fixes in the PCI bridge code;
- LUNs > 255 are now allowed in virtio-scsi.
The full list is:
> pci-scan: Fix pci-bridge-set-mem-base and pci-bridge-set-mem-limit
> pci: Avoid 32-bit prefetchable memory area if possible
> Remove unused functions ishexdigit and $cat-comma
> pci: Translate PCI addresses to host addresses at the end of map-in
> Define 'open' and 'close' words of the /aliases nodes right from the start
> virtio-scsi: Allow LUNs bigger than 255
> paflof: Silence gcc's -Warray-bounds warning for stack pointers
> board_qemu: move code out of fdt-fix-node-phandle
> board_qemu: drop unused values early in fdt-fix-node-phandle
> pci: Improve the pci-var-out debug function
> libhvcall: drop unused KVMPPC_H_REPORT_MC_ERR and KVMPPC_H_NMI_MCE defines
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit 0cffce56 (hw/ppc/spapr.c: adding pending_dimm_unplugs to
sPAPRMachineState) introduced a new way to track pending LMBs of a DIMM
device that is marked for removal. Since this commit we can hit the
assert in spapr_pending_dimm_unplugs_add() in the following situation:
- DIMM device removal fails as the guest doesn't allow the removal.
- Subsequent attempt to remove the same DIMM would hit the assert
as the corresponding sPAPRDIMMState is still part of the
pending_dimm_unplugs list.
Fix this by removing the assert and conditionally adding the
sPAPRDIMMState to pending_dimm_unplugs list only when it is not
already present.
Fixes: 0cffce56ae
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
[dwg: Tweaked to avoid returning NULL when spapr_pending_dimm_unplugs_add()
does find an existing entry]
Reviewed-by: Daniel Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit 3a38429 ("spapr: Add a "no HPT" encoding to HTAB migration stream")
allows migrating an empty HPT, but doesn't correctly mark the
end of the migration stream.
The end condition (value returned by htab_save_iterate())
should be 1, whereas in 3a38429 it returns 0.
The problem can be reproduced with QEMU monitor command "savevm":
the command never stops and the disk image grows without limit.
Fixes: 3a38429748
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
f1c2dc7c86 "spapr-pci: rework MSI/MSIX" (07/2013) changed the MSIX encoding
but forgot to update the comment, so this changes it.
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of migrating the flash by creating the memory region
with memory_region_init_ram_nomigrate() and then calling
vmstate_register_ram_global(), just use memory_region_init_ram(),
which now handles migration registration automatically.
This is a migration compatibility break for the integratorcp
board, because the RAM region's migration name changes to
include the device path. This is OK because we don't guarantee
migration compatibility for this board.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1500310341-28931-1-git-send-email-peter.maydell@linaro.org
The fsl-imx* boards accidentally forgot to register the ROM memory
regions for migration. This used to require a manual step of calling
vmstate_register_ram(), but following commits
1cfe48c1ce21..b08199c6fbea194 we can use memory_region_init_rom() to
have it do the migration for us.
This is a migration break, but the migration code currently does not
handle the case of having two RAM regions which were not registered
for migration, and so prior to this commit a migration load would
always fail with:
"qemu-system-arm: Length mismatch: 0x4000 in != 0x18000: Invalid argument"
NB: migration appears at this point to be broken for this board
anyway -- it succeeds but the destination hangs; probably some
device in the system does not yet support migration.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1500309775-18361-1-git-send-email-peter.maydell@linaro.org
Test cases 030, 041 and 055 used to sleep for a second after calling
block-job-pause to make sure that the block job had time to actually
get into paused state. We can instead poll its status and use that one
second only as a timeout.
The tests also slept a second for checking that the block jobs don't
make progress while being paused. Half a second is more than enough for
this.
These changes reduce the total time for the three tests by 25 seconds on
my laptop (from 155 seconds to 130).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Commits 0db832f and 6cdbceb introduced the automatic insertion of filter
nodes above the top layer of mirror and commit block jobs. The
assumption made there was that since libvirt doesn't do node-level
management of the block layer yet, it shouldn't be affected by added
nodes.
This is true as far as commands issued by libvirt are concerned. It only
uses BlockBackend names to address nodes, so any operations it performs
still operate on the root of the tree as intended.
However, the assumption breaks down when you consider query commands,
which return data for the wrong node now. These commands also return
information on some child nodes (bs->file and/or bs->backing), which
libvirt does make use of, and which refer to the wrong nodes, too.
One of the consequences is that oVirt gets wrong information about the
image size and stops the VM in response as long as a mirror or commit
job is running:
https://bugzilla.redhat.com/show_bug.cgi?id=1470634
This patch fixes the problem by hiding the implicit nodes created
automatically by the mirror and commit block jobs in the output of
query-block and BlockBackend-based query-blockstats as long as the user
doesn't indicate that they are aware of those nodes by providing a node
name for them in the QMP command to start the block job.
The node-based commands query-named-block-nodes and query-blockstats
with query-nodes=true still show all nodes, including implicit ones.
This ensures that users that are capable of node-level management can
still access the full information; users that only know BlockBackends
won't use these commands.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Peter Krempa <pkrempa@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
We used MAX() instead of the intended MIN() when computing how many
sectors to view in the current loop iteration of qcow2_measure(),
and passed in a value of INT_MAX sectors instead of our more usual
limit of BDRV_REQUEST_MAX_SECTORS (the latter avoids 32-bit overflow
on conversion to bytes). For small files, the bug is harmless:
bdrv_get_block_status_above() clamps its *pnum answer to the BDS
size, regardless of any insanely larger input request. However, for
any file at least 2T in size, we can very easily end up going into an
infinite loop (the maximum of 0x100000000 sectors and INT_MAX is a
64-bit quantity, which becomes 0 when assigned to int; once nb_sectors
is 0, we never make progress).
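The truncation is easy to demonstrate in isolation (standalone example, not
the qcow2 code; the real fix also uses BDRV_REQUEST_MAX_SECTORS instead of
INT_MAX):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MAX(a, b) ((a) > (b) ? (a) : (b))
    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
        uint64_t remaining = 0x100000000ULL;   /* 2T image, in 512-byte sectors */

        int bad  = MAX(remaining, INT_MAX);    /* 64-bit result truncated to int
                                                  (0 on typical hosts) */
        int good = MIN(remaining, INT_MAX);    /* clamped, stays positive */

        printf("bad=%d good=%d\n", bad, good);
        return 0;
    }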
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We've been documenting the value in bytes since its introduction
in commit b9a9b3a4 (v1.3), where it was actually reported in bytes.
Commit e4654d2 (v2.0) then removed things from block/qapi.c, in
preparation for a rewrite to a list of dirty sectors in the next
commit 21b5683 in block.c, but the new code mistakenly started
reporting in sectors.
Fixes: https://bugzilla.redhat.com/1441460
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
A run of './check -qcow2 -g quick' on my machine produced only
two tests that took longer than 5 seconds; 178 took 18, and
189 took 7. Remove them from the quick group.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
QAPI patches for 2017-07-18
# gpg: Signature made Mon 24 Jul 2017 12:40:56 BST
# gpg: using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2017-07-18-v2:
migration: Use JSON null instead of "" to reset parameter to default
migration: Unshare MigrationParameters struct for now
migration: Add TODO comments on duplication of QAPI_CLONE()
migration: Clean up around tls_creds, tls_hostname
hmp: Clean up and simplify hmp_migrate_set_parameter()
block: Use JSON null instead of "" to disable backing file
tests/test-qobject-input-visitor: Drop redundant test
qapi: Introduce a first class 'null' type
qapi: Use QNull for a more regular visit_type_null()
qapi: Separate type QNull from QObject
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Clang 3.9 passes the CONFIG_AVX2_OPT configure test. However, the
supplied <cpuid.h> does not contain the bit_AVX2 define that we use
when detecting whether the routine can be enabled.
Introduce a qemu-specific header that uses the compiler's definition
of __cpuid et al, but supplies any missing bit_* definitions needed.
This avoids introducing any extra ifdefs to util/bufferiszero.c, and
allows quite a few to be removed from tcg/i386/tcg-target.inc.c.
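A standalone sketch of the fallback-define approach on an x86 host (the bit
value follows the CPUID leaf 7 definition; the QEMU-specific header the commit
adds is not reproduced here):

    #include <cpuid.h>
    #include <stdio.h>

    /* supply the constant if the compiler's <cpuid.h> lacks it:
     * CPUID.(EAX=7,ECX=0):EBX bit 5 is AVX2 */
    #ifndef bit_AVX2
    #define bit_AVX2 (1 << 5)
    #endif

    int main(void)
    {
        unsigned a, b, c, d;

        if (__get_cpuid_max(0, NULL) >= 7) {
            __cpuid_count(7, 0, a, b, c, d);
            printf("AVX2 %ssupported\n", (b & bit_AVX2) ? "" : "not ");
        }
        return 0;
    }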
Signed-off-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20170719044018.18063-1-rth@twiddle.net
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
migrate-set-parameters sets migration parameters according to its
arguments like this:
* Present means "set the parameter to this value"
* Absent means "leave the parameter unchanged"
* Except for parameters tls_creds and tls_hostname, "" means "reset
the parameter to its default value"
The first two are perfectly normal: presence of the parameter makes
the command do something.
The third one overloads the parameter with a second meaning. The
overloading is *implicit*, i.e. it's not visible in the types. Works
here, because "" is neither a valid TLS credentials ID, nor a valid
host name.
Pressing argument values the schema accepts, but are semantically
invalid, into service to mean "reset to default" is not general, as
suitable invalid values need not exist. I also find it ugly.
To clean this up, we could add a separate flag argument to ask for
"reset to default", or add a distinct value to @tls_creds and
@tls_hostname. This commit implements the latter: add JSON null to
the values of @tls_creds and @tls_hostname, deprecate "".
Because we're so close to the 2.10 freeze, implement it in the
stupidest way possible: have qmp_migrate_set_parameters() rewrite null
to "" before anything else can see the null. The proper way to do it
would be rewriting "" to null, but that requires fixing up code to
work with null. Add TODO comments for that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Commit de63ab6 "migrate: Share common MigrationParameters struct"
reused MigrationParameters for the arguments of
migrate-set-parameters, with the following rationale:
It is rather verbose, and slightly error-prone, to repeat
the same set of parameters for input (migrate-set-parameters)
as for output (query-migrate-parameters), where the only
difference is whether the members are optional. We can just
document that the optional members will always be present
on output, and then share a common struct between both
commands. The next patch can then reduce the amount of
code needed on input.
I need to unshare them to correct a design flaw in a stupid, but
minimally invasive way, in the next commit. We can restore the
sharing when we redo that patch in a less stupid way. Add a suitable
TODO comment.
Note that I revert only the sharing part of commit de63ab6, not the
part that made the members of query-migrate-parameters' result
optional. The schema (and thus introspection) remains inaccurate for
query-migrate-parameters. If we decide not to restore the sharing, we
should revert that part, too.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
qmp_query_migrate_parameters() and qmp_migrate_set_parameters()
effectively duplicate QAPI_CLONE() inline. Add suitable TODO
comments.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Optional MigrationParameters members tls_creds and tls_hostname can't
actually be absent outside qmp_migrate_set_parameters() since commit
4af245d (v2.9.0).
Note that commit 4af245d reverted the part of commit de63ab6 (v2.8.0)
that made tls_creds and tls_hostname absent instead of "" in the value
of query-migrate-parameters, even though commit de63ab6 called that a
mistake. What a mess.
Drop the redundant tests for presence, and update documentation.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
The bulk of hmp_migrate_set_parameter()'s code sets one member of
MigrationParameters according to the command's arguments. It uses a
string visitor for integer and boolean members, but not for string and
size members. It calls visit_type_bool() right away, but delays
visit_type_int() some. The delaying requires a flag variable and a
bit of trickery: we set all integer members instead of just the one we
want, and rely on the has_FOOs to mask the unwanted ones.
Clean this up as follows. Don't delay calling visit_type_int(). Use
the string visitor for strings, too. This involves extra allocations
and cleanup, but doing them is simpler and cleaner than avoiding them.
Sadly, using the string visitor for sizes isn't possible, because it
defaults to Bytes rather than Mebibytes. Add a comment explaining
that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
BlockdevRef is an alternate of BlockdevOptions (inline definition) and
str (reference to an existing block device by name). BlockdevRef
value "" is special: "no block device should be referenced." It's
actually interpreted that way in just one place: optional member
@backing of COW formats. Semantics:
* Present means "use this block device" as backing storage
* Absent means "default to the one stored in the image"
* Except "" means "don't use backing storage at all"
The first two are perfectly normal: when the parameter is absent, it
defaults to an implied value, but the value's meaning is the same.
The third one overloads the parameter with a second meaning. The
overloading is *implicit*, i.e. it's not visible in the types. Works
here, because "" is not a valid block device ID.
Pressing argument values the schema accepts, but are semantically
invalid, into service to mean "do something else entirely" is not
general, as suitable invalid values need not exist. I also find it
ugly.
To clean this up, we could add a separate flag argument to suppress
@backing, or add a distinct value to @backing. This commit implements
the latter: add JSON null to the values of @backing, deprecate "".
Because we're so close to the 2.10 freeze, implement it in the
stupidest way possible: have qmp_blockdev_add() rewrite null to ""
before anything else can see the null. Works, because BlockdevRef
occurs only within arguments of blockdev-add. The proper way to do it
would be rewriting "" to null, preferably in a cleaner way, but that
requires fixing up code to work with null. Add a TODO comment for
that.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Acked-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
test_visitor_in_alternate() tests UserDefAlternate with inadmissible
input. test_visitor_in_fail_alternate() does basically the same.
Drop the former, keep the latter.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
I expect the 'null' type to be useful mostly for members of alternate
types.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Make visit_type_null() take an @obj argument like its buddies. This
helps keep the next commit simple.
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Under certain circumstances normal xen-mapcache functioning may be broken
by a guest's actions. This may lead to either QEMU performing exit() due to
a caught bad pointer (and with QEMU process gone the guest domain simply
appears hung afterwards) or actual use of the incorrect pointer inside
QEMU address space -- a write to unmapped memory is possible. The bug is
hard to reproduce on an i440 machine, as multiple DMA sources are required
(though it's possible in theory, using multiple emulated devices), but can
be reproduced somewhat easily on a Q35 machine using an emulated AHCI
controller -- each NCQ queue command slot may be used as an independent
DMA source, ex. using the READ FPDMA QUEUED command, so a single storage
device on the AHCI controller port will be enough to produce multiple DMAs
(up to 32). The detailed description of the issue follows.
Xen-mapcache provides an ability to map parts of a guest memory into
QEMU's own address space to work with.
There are two types of cache lookups:
- translating a guest physical address into a pointer in QEMU's address
space, mapping a part of guest domain memory if necessary (while trying
to reduce a number of such (re)mappings to a minimum)
- translating a QEMU's pointer back to its physical address in guest RAM
These lookups are managed via two linked-lists of structures.
MapCacheEntry is used for forward cache lookups, while MapCacheRev -- for
reverse lookups.
Every guest physical address is broken down into 2 parts:
address_index = phys_addr >> MCACHE_BUCKET_SHIFT;
address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);
MCACHE_BUCKET_SHIFT depends on the host (32/64-bit) and is equal to 20 for
a 64-bit system (which is assumed for the rest of this description). Basically,
this means that we deal with 1 MB chunks and offsets within those 1 MB
chunks. All mappings are created with 1MB-granularity, i.e. 1MB/2MB/3MB
etc. Most DMA transfers are typically less than 1MB; however, if the
transfer crosses any 1MB border(s), then the nearest larger mapping size
will be used, so ex. a 512-byte DMA transfer with the start address
700FFF80h will actually require a 2MB range.
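Worked out in a standalone snippet (bucket constants as described above for a
64-bit host):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MCACHE_BUCKET_SHIFT 20                       /* 1 MB buckets */
    #define MCACHE_BUCKET_SIZE  (1ULL << MCACHE_BUCKET_SHIFT)

    int main(void)
    {
        uint64_t phys_addr = 0x700FFF80;                 /* DMA start address */
        uint64_t len = 512;

        uint64_t address_index  = phys_addr >> MCACHE_BUCKET_SHIFT;
        uint64_t address_offset = phys_addr & (MCACHE_BUCKET_SIZE - 1);

        /* round the mapping up to whole buckets: 0xFFF80 + 512 crosses the
         * 1 MB boundary, so two buckets (2 MB) are needed */
        uint64_t buckets = (address_offset + len + MCACHE_BUCKET_SIZE - 1)
                           >> MCACHE_BUCKET_SHIFT;

        printf("index=0x%" PRIx64 " offset=0x%" PRIx64 " mapping=%" PRIu64 " MB\n",
               address_index, address_offset, buckets);
        return 0;
    }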
The current implementation assumes that MapCacheEntries are unique for a given
address_index and size pair, and that a single MapCacheEntry may be reused
by multiple requests -- in this case the 'lock' field will be larger than
1. On the other hand, each requested guest physical address (with the 'lock'
flag) is described by its own MapCacheRev. So there may be multiple MapCacheRev
entries corresponding to a single MapCacheEntry. The xen-mapcache code
uses MapCacheRev entries to retrieve the address_index & size pair, which
is in turn used to find the related MapCacheEntry. The 'lock' field within
a MapCacheEntry structure is actually a reference counter which shows
the number of corresponding MapCacheRev entries.
The bug lies in the guest's ability to indirectly manipulate the
xen-mapcache MapCacheEntry list via a special sequence of DMA
operations, typically for storage devices. In order to trigger the bug,
the guest needs to issue DMA operations in a specific order and with
specific timing. Although xen-mapcache is protected by a mutex lock, this
doesn't help in this case, as the bug is not due to a race condition.
Suppose we have 3 DMA transfers, namely A, B and C, where
- transfer A crosses 1MB border and thus uses a 2MB mapping
- transfers B and C are normal transfers within 1MB range
- and all 3 transfers belong to the same address_index
In this case, if all these transfers are to be executed one-by-one
(without overlaps), no special treatment is necessary -- each transfer's
mapping lock will be set and then cleared on unmap before starting
the next transfer.
The situation changes when DMA transfers overlap in time, ex. like this:
|===== transfer A (2MB) =====|
|===== transfer B (1MB) =====|
|===== transfer C (1MB) =====|
time --->
In this situation the following sequence of actions happens:
1. transfer A creates a mapping to 2MB area (lock=1)
2. transfer B (1MB) tries to find an available mapping but cannot find one
because transfer A is still in progress, and it has 2MB size + non-zero
lock. So transfer B creates another mapping -- same address_index,
but 1MB size.
3. transfer A completes, making the 1st mapping entry available by setting its
lock to 0
4. transfer C starts and tries to find an available mapping entry and sees
that the 1st entry has lock=0, so it uses this entry but remaps the mapping
to a 1MB size
5. transfer B completes and by this time
- there are two locked entries in the MapCacheEntry list with the SAME
values for both address_index and size
- the entry for transfer B actually resides farther in list while
transfer C's entry is first
6. xen_ram_addr_from_mapcache() for transfer B gets correct address_index
and size pair from corresponding MapCacheRev entry, but then it starts
looking for MapCacheEntry with these values and finds the first entry
-- which belongs to transfer C.
At this point there may be following possible (bad) consequences:
1. xen_ram_addr_from_mapcache() will use a wrong entry->vaddr_base value
in this statement:
raddr = (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
((unsigned long) ptr - (unsigned long) entry->vaddr_base);
resulting in an incorrect raddr value returned from the function. The
(ptr - entry->vaddr_base) expression may produce both positive and negative
numbers and its actual value may differ greatly, as many
map/unmap operations take place. If the value is beyond guest RAM
limits, then a "Bad RAM offset" error will be triggered and logged,
followed by exit() in QEMU.
2. If the raddr value doesn't exceed guest RAM boundaries, the same sequence
of actions will be performed for xen_invalidate_map_cache_entry() on DMA
unmap, resulting in the wrong MapCacheEntry being unmapped while the DMA
operation which uses it is still active. The above example must
be extended by one more DMA transfer in order to allow unmapping, as the
first mapping in the list is sort of resident.
The patch modifies the way MapCacheEntries are added to the
list, avoiding duplicates.
Signed-off-by: Alexey Gerasimenko <x1917x@gmail.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Solaris 9 was released in 2002, its successor Solaris 10 was
released in 2005, and Solaris 9 was end-of-lifed in 2014.
Nobody has stepped forward to express interest in supporting
Solaris of any flavour, so removing support for the ancient
versions seems uncontroversial.
In particular, this allows us to remove a use of 'uname'
in configure that won't work if you're cross-compiling.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1499955697-28045-1-git-send-email-peter.maydell@linaro.org
For a very long time we have used 'uname -s' as our fallback if
we don't identify the target OS using a compiler #define. This
obviously doesn't work for cross-compilation, and we've had
a comment suggesting we fix this in configure for a long time.
Since we now have an exhaustive list of which OSes we can run
on (thanks to commit 898be3e041 making an unrecognized OS
be a fatal error), we know which ones we're missing.
Add check_define tests for the remaining OSes we support. The
defines checked are based on ones we already use in the codebase for
identifying the host OS (with the exception of GNU/kFreeBSD).
We can now set bogus_os immediately rather than doing it later.
We leave the comment about uname being bad untouched, since
there is still a use of it for the fallback for unrecognized
host CPU type.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1499958932-23839-1-git-send-email-peter.maydell@linaro.org
On OpenBSD the compiler warns:
bsd-user/main.c:622:21: warning: variable 'sig' set but not used [-Wunused-but-set-variable]
This is because a lot of the signal delivery code is #if-0'd
out as unused. Reshuffle #ifdefs a bit to silence the warning.
(We make the minimum change here rather than removing the code outright,
since the bsd-user patchset should make this all work correctly and
there's no point giving its developers an awkward rebase task.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1500395194-21455-5-git-send-email-peter.maydell@linaro.org
On OpenBSD the compiler complains:
bsd-user/bsdload.c:54:17: warning: variable 'id_change' set but not used [-Wunused-but-set-variable]
This is dead code that was originally copied from linux-user.
We fixed this in linux-user in commit 331c23b5ca in 2011;
delete the useless code from bsd-user too.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-id: 1500395194-21455-4-git-send-email-peter.maydell@linaro.org
MIPS patches 2017-07-21
Changes:
* Add Enhanced Virtual Addressing (EVA) support
# gpg: Signature made Fri 21 Jul 2017 03:25:15 BST
# gpg: using RSA key 0x2238EB86D5F797C2
# gpg: Good signature from "Yongbok Kim <yongbok.kim@imgtec.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 8600 4CF5 3415 A5D9 4CFA 2B5C 2238 EB86 D5F7 97C2
* remotes/yongbok/tags/mips-20170721:
target/mips: Enable CP0_EBase.WG on MIPS64 CPUs
target/mips: Add EVA support to P5600
target/mips: Implement segmentation control
target/mips: Add segmentation control registers
target/mips: Add an MMU mode for ERL
target/mips: Abstract mmu_idx from hflags
target/mips: Check memory permissions with mem_idx
target/mips: Decode microMIPS EVA load & store instructions
target/mips: Decode MIPS32 EVA load & store instructions
target/mips: Prepare loads/stores for EVA
target/mips: Add CP0_Ebase.WG (write gate) support
target/mips: Weaken TLB flush on UX,SX,KX,ASID changes
target/mips: Fix TLBWI shadow flush for EHINV,XI,RI
target/mips: Fix MIPS64 MFC0 UserLocal on BE host
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Final CI updates for soft-freeze
Tweaks from Paolo for J=x Travis compiles
Bunch of updated cross-compile targets from Philippe
Additional debug tools in travis image from Me
# gpg: Signature made Tue 18 Jul 2017 11:00:26 BST
# gpg: using RSA key 0xFBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>"
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-ci-updates-for-softfreeze-180717-2: (32 commits)
docker: install clang since Shippable setup_ve() verify it is available
docker: warn users to use newer debian8/debian9 base image
docker: add debian Ports base image
shippable: add win32/64 targets
docker: add MXE (M cross environment) base image for MinGW-w64
shippable: add mips64el targets
docker: add debian/mips64el image
shippable: use debian/mips[eb] targets
docker: add debian/mips[eb] images
shippable: add powerpc target
docker: add debian/powerpc based on Jessie
docker: add 'apt-fake' script which generate fake debian packages
docker: add qemu:debian-jessie based on outdated jessie release
shippable: add x86_64 targets
shippable: add ppc64el targets
shippable: add armel targets
docker: enable nettle to extend code coverage on arm64
docker: enable gcrypt to extend code coverage on amd64
docker: enable netmap to extend code coverage on amd64
docker: enable virgl to extend code coverage on amd64
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix various warnings about set-but-not-used variables on OpenBSD:
bsd-user/elfload.c:1158:15: warning: variable 'mapped_addr' set but not used [-Wunused-but-set-variable]
bsd-user/elfload.c:1165:9: warning: variable 'status' set but not used [-Wunused-but-set-variable]
bsd-user/elfload.c:1168:15: warning: variable 'elf_stack' set but not used [-Wunused-but-set-variable]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1500395194-21455-3-git-send-email-peter.maydell@linaro.org
On NetBSD, where tolower() and toupper() are implemented using an
array lookup, the compiler warns if you pass a plain 'char'
to these functions:
gdbstub.c:914:13: warning: array subscript has type 'char'
This reflects the fact that toupper() and tolower() give
undefined behaviour if they are passed a value that isn't
a valid 'unsigned char' or EOF.
We have qemu_tolower() and qemu_toupper() to avoid this problem;
use them.
(The use in scsi-generic.c does not trigger the warning because
it passes a uint8_t; we switch it anyway, for consistency.)
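The safe pattern in isolation (the qemu_tolower()/qemu_toupper() wrappers
exist for exactly this; the helper name below is illustrative):

    #include <ctype.h>
    #include <stdio.h>

    /* cast to unsigned char before calling the <ctype.h> functions, so the
     * value is always a valid argument regardless of whether plain char is
     * signed on the host */
    static int safe_toupper(char c)
    {
        return toupper((unsigned char)c);
    }

    int main(void)
    {
        for (const char *s = "pause"; *s; s++) {
            putchar(safe_toupper(*s));
        }
        putchar('\n');
        return 0;
    }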
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> for the s390 part.
Acked-by: David Gibson <david@gibson.dropbear.id.au>
Message-id: 1500568290-7966-1-git-send-email-peter.maydell@linaro.org
On NetBSD the compiler warns:
util/oslib-posix.c: In function 'sigaction_invoke':
util/oslib-posix.c:589:5: warning: missing braces around initializer [-Wmissing-braces]
siginfo_t si = { 0 };
^
util/oslib-posix.c:589:5: warning: (near initialization for 'si.si_pad') [-Wmissing-braces]
because on this platform siginfo_t is defined as
typedef union siginfo {
char si_pad[128]; /* Total size; for future expansion */
struct _ksiginfo _info;
} siginfo_t;
Avoid this warning by initializing the struct with {} instead;
this is a GCC extension but we use it all over the codebase already.
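In standalone form (the union below merely mimics the NetBSD layout shown
above; it is not the real siginfo_t):

    #include <string.h>

    typedef union {
        char pad[128];                 /* first member is an array ... */
        struct { int signo; } info;
    } fake_siginfo_t;

    int main(void)
    {
        /* "= { 0 }" makes GCC ask for another set of braces around the
         * array initializer; "= {}" (a GCC/clang extension) avoids that */
        fake_siginfo_t a = {};

        /* strictly standard alternative */
        fake_siginfo_t b;
        memset(&b, 0, sizeof(b));

        (void)a;
        (void)b;
        return 0;
    }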
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1500568341-8389-1-git-send-email-peter.maydell@linaro.org
Add the Enhanced Virtual Addressing (EVA) feature to the P5600 core
configuration, along with the related Segmentation Control (SC) feature
and writable CP0_EBase.WG bit.
This allows it to run Malta EVA kernels.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Implement the optional segmentation control feature in the virtual to
physical address translation code.
The fixed legacy segment and xkphys handling is replaced with a dynamic
layout based on the segmentation control registers (which should be set
up even when the feature is not exposed to the guest).
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com:
cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The optional segmentation control registers CP0_SegCtl0, CP0_SegCtl1 &
CP0_SegCtl2 control the behaviour and required privilege of the legacy
virtual memory segments.
Add them to the CP0 interface so they can be read and written when
CP0_Config3.SC=1, and initialise them to describe the standard legacy
layout so they can be used in future patches regardless of whether they
are exposed to the guest.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The segmentation control feature allows a legacy memory segment to
become unmapped uncached at error level (according to CP0_Status.ERL),
and in fact the user segment is already treated in this way by QEMU.
Add a new MMU mode for this state so that QEMU's mappings don't persist
between ERL=0 and ERL=1.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
[yongbok.kim@imgtec.com:
cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
The MIPS mmu_idx is sometimes calculated from hflags without an env
pointer available as cpu_mmu_index() requires.
Create a common hflags_mmu_index() for the purpose of this calculation
which can operate on any hflags, not just with an env pointer, and
update cpu_mmu_index() itself and gen_intermediate_code() to use it.
Also update debug_post_eret() and helper_mtc0_status() to log the MMU
mode with the status change (SM, UM, or nothing for kernel mode) based
on cpu_mmu_index() rather than directly testing hflags.
This will also allow the logic to be more easily updated when a new MMU
mode is added.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
When performing virtual to physical address translation, check the
required privilege level based on the mem_idx rather than the mode in
the hflags. This will allow EVA loads & stores to operate safely only on
user memory from kernel mode.
For the cases where the mmu_idx doesn't need to be overridden
(mips_cpu_get_phys_page_debug() and cpu_mips_translate_address()), we
calculate the required mmu_idx using cpu_mmu_index(). Note that this
only tests the MIPS_HFLAG_KSU bits rather than MIPS_HFLAG_MODE, so we
don't test the debug mode hflag MIPS_HFLAG_DM any longer. This should be
fine as get_physical_address() only compares against MIPS_HFLAG_UM and
MIPS_HFLAG_SM, neither of which should get set by compute_hflags() when
MIPS_HFLAG_DM is set.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Implement decoding of microMIPS EVA load and store instruction groups in
the POOL31C pool. These use the same gen_ld(), gen_st(), gen_st_cond()
helpers as the MIPS32 decoding, passing the equivalent MIPS32 opcodes as
opc.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Implement decoding of MIPS32 EVA loads and stores. These access the user
address space from kernel mode when implemented, so for each instruction
we need to check that EVA is available from Config5.EVA & check for
sufficient COP0 privilege (with the new check_eva()), and then override
the mem_idx used for the operation.
Unfortunately some Loongson 2E instructions use overlapping encodings,
so we must be careful not to prevent those from being decoded when EVA
is absent.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
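A self-contained sketch of the decode-time check described here (every name, bit position and index below is an assumption for illustration, not QEMU's internal API):

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define CP0C5_EVA    28   /* assumed bit position of Config5.EVA */
    #define MMU_IDX_USER 2    /* assumed user-mode MMU index */

    typedef struct DecodeCtx {
        uint32_t config5;     /* guest CP0_Config5 as seen at translate time */
        bool cp0_usable;      /* kernel mode or CP0 access otherwise granted */
        int mem_idx;          /* MMU index used for the generated access */
        bool raise_ri;        /* set when a Reserved Instruction is signalled */
    } DecodeCtx;

    /* EVA instructions decode only when Config5.EVA is set and the code has
     * sufficient COP0 privilege; otherwise signal Reserved Instruction. */
    static bool check_eva(DecodeCtx *ctx)
    {
        if (!(ctx->config5 & (1u << CP0C5_EVA)) || !ctx->cp0_usable) {
            ctx->raise_ri = true;
            return false;
        }
        return true;
    }

    /* On success, the load/store is emitted with the user MMU index so it
     * reaches the user address map from kernel mode. */
    static void decode_eva_load(DecodeCtx *ctx)
    {
        if (check_eva(ctx)) {
            ctx->mem_idx = MMU_IDX_USER;
            /* ... emit the load with ctx->mem_idx ... */
        }
    }

    int main(void)
    {
        DecodeCtx ctx = { .config5 = 1u << CP0C5_EVA, .cp0_usable = true };
        decode_eva_load(&ctx);
        assert(ctx.mem_idx == MMU_IDX_USER && !ctx.raise_ri);
        return 0;
    }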
EVA load and store instructions access the user mode address map, so
they need to use mem_idx of MIPS_HFLAG_UM. Update the various utility
functions to allow mem_idx to be more easily overridden from the
decoding logic.
Specifically we add a mem_idx argument to the op_ld/st_* helpers used
for atomics, and a mem_idx local variable to gen_ld(), gen_st(), and
gen_st_cond().
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Add support for the CP0_EBase.WG bit, which allows upper bits to be
written (bits 31:30 on MIPS32, or bits 63:30 on MIPS64), along with the
CP0_Config5.CV bit to control whether the exception vector for Cache
Error exceptions is forced into KSeg1.
This is necessary on MIPS32 to support Segmentation Control and Enhanced
Virtual Addressing (EVA) extensions (where KSeg1 addresses may not
represent an unmapped uncached segment).
It is also useful on MIPS64 to allow the exception base to reside in
XKPhys, and possibly out of range of KSEG0 and KSEG1.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com:
minor changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
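Very roughly, the MIPS32 write side could look like the sketch below (the bit positions and the helper name are assumptions; the MIPS64 case would widen the upper mask to bits 63:30):

    #include <assert.h>
    #include <stdint.h>

    /* Assumed MIPS32 EBase layout: bits 29:12 base (always writable),
     * bit 11 WG, bits 31:30 writable only once WG is set. */
    #define CP0_EBASE_WG     (1u << 11)
    #define EBASE_BASE_MASK  0x3ffff000u
    #define EBASE_UPPER_MASK 0xc0000000u

    static uint32_t write_ebase(uint32_t old, uint32_t val, int wg_implemented)
    {
        uint32_t mask = EBASE_BASE_MASK;

        if (wg_implemented) {
            mask |= CP0_EBASE_WG;          /* WG itself becomes writable */
            if (val & CP0_EBASE_WG) {
                mask |= EBASE_UPPER_MASK;  /* upper bits unlocked by WG */
            }
        }
        return (old & ~mask) | (val & mask);
    }

    int main(void)
    {
        uint32_t ebase = 0x80000000u;  /* reset value: vector base in kseg0 */

        /* Without WG in the written value, bits 31:30 are preserved. */
        assert(write_ebase(ebase, 0x00001000u, 1) == 0x80001000u);

        /* With WG set, the upper bits can be moved out of kseg0/kseg1. */
        assert((write_ebase(ebase, 0x00001000u | CP0_EBASE_WG, 1) & 0xc0000000u) == 0);
        return 0;
    }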
There is no need to invalidate any shadow TLB entries when the ASID
changes or when access to one of the 64-bit segments has been disabled,
since neither case reveals to software whether any TLB entries have
been evicted into the shadow half of the TLB.
Therefore weaken the tlb flushes in these cases to only flush the QEMU
TLB.
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
Writing a specific TLB entry with TLBWI flushes the shadow TLB entries,
unless the write merely upgrades the access permissions of the existing
entry. This is necessary because software will from then on expect the
previous mapping in that entry to no longer be in effect (even if QEMU
has quietly evicted it to the shadow TLB on a TLBWR).
However, the flush is currently skipped if only the EHINV, XI, or RI
bits have been set, even though that reduces permissions, so add the
necessary checks to trigger the flush when those bits are newly set.
Fixes: 2fb58b7374 ("target-mips: add RI and XI fields to TLB entry")
Fixes: 9456c2fbcd ("target-mips: add TLBINV support")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Yongbok Kim <yongbok.kim@imgtec.com>
[yongbok.kim@imgtec.com:
cosmetic changes]
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
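The shape of the check being strengthened is roughly as below (a sketch with assumed field names, not QEMU's r4k TLB structures): the shadow flush may only be skipped when nothing that removes access is being newly set.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative TLB entry; the field names are assumptions. */
    typedef struct TlbEntry {
        uint64_t vpn;
        uint16_t asid;
        bool global, valid, dirty, ehinv, xi, ri;
    } TlbEntry;

    /* TLBWI may skip the shadow-TLB flush only when the write does nothing
     * but upgrade access to the existing mapping.  Newly setting EHINV, XI
     * or RI removes access, so those transitions must flush as well. */
    static bool tlbwi_needs_shadow_flush(const TlbEntry *old, const TlbEntry *upd)
    {
        return old->vpn != upd->vpn || old->asid != upd->asid ||
               old->global != upd->global ||
               (old->valid && !upd->valid) ||  /* V cleared */
               (old->dirty && !upd->dirty) ||  /* D (write enable) cleared */
               (!old->ehinv && upd->ehinv) ||  /* entry newly invalidated */
               (!old->xi && upd->xi) ||        /* execute-inhibit newly set */
               (!old->ri && upd->ri);          /* read-inhibit newly set */
    }

    int main(void)
    {
        TlbEntry old = { .vpn = 0x1000, .valid = true };
        TlbEntry upd = old;

        upd.dirty = true;                              /* pure permission upgrade */
        assert(!tlbwi_needs_shadow_flush(&old, &upd));

        upd.ri = true;                                 /* read-inhibit newly set */
        assert(tlbwi_needs_shadow_flush(&old, &upd));
        return 0;
    }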
Using MFC0 to read CP0_UserLocal uses tcg_gen_ld32s_tl, however
CP0_UserLocal is a target_ulong. On a big endian host with a MIPS64
target this reads and sign extends the more significant half of the
64-bit register.
Fix this by using ld_tl to load the whole target_ulong and ext32s_tl to
sign extend it, as done for various other target_ulong COP0 registers.
Fixes: d279279e2b ("target-mips: implement UserLocal Register")
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Cc: Yongbok Kim <yongbok.kim@imgtec.com>
Cc: Aurelien Jarno <aurelien@aurel32.net>
Cc: Petar Jovanovic <petar.jovanovic@imgtec.com>
Reviewed-by: Yongbok Kim <yongbok.kim@imgtec.com>
Signed-off-by: Yongbok Kim <yongbok.kim@imgtec.com>
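The underlying pattern can be demonstrated with plain C (this illustrates the endianness hazard only, not the TCG code itself; the fix described above uses ld_tl followed by ext32s_tl):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Reading only the first 32 bits of a 64-bit register through a plain
     * 32-bit load picks up the *more* significant half on a big-endian
     * host; loading the whole value and then sign-extending bits 31:0 is
     * endian-neutral. */
    int main(void)
    {
        uint64_t user_local = 0x0000000012345678ull;  /* stand-in for CP0_UserLocal */
        int32_t low32;

        /* ld32s-style load: reinterpret the first four bytes in memory */
        memcpy(&low32, &user_local, sizeof(low32));
        printf("32-bit load:   0x%08x  (host-endian dependent)\n", (uint32_t)low32);

        /* ld_tl + ext32s-style load: whole value, then sign-extend low half */
        int64_t fixed = (int64_t)(int32_t)user_local;
        printf("full + ext32s: 0x%016llx (always the low 32 bits)\n",
               (unsigned long long)fixed);
        return 0;
    }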
In various places in our test makefiles and scripts we use the
shell $RANDOM to create a random number. This is a bash-specific
extension and doesn't work on other shells.
With dash the shell doesn't complain, it just effectively
always evaluates $RANDOM to 0:
echo $((RANDOM + 32768)) => 32768
However, on NetBSD the shell will complain:
"-sh: arith: syntax error: "RANDOM + 32768"
which means that "make check" fails.
Switch to using "${RANDOM:-0}" instead of $RANDOM,
which will portably either give us a random number or zero.
This means that on non-bash shells we don't get such
good test coverage via the MALLOC_PERTURB_ setting, but
we were already in that situation for non-bash shells.
Our only other uses of $RANDOM (in tests/qemu-iotests/check
and tests/qemu-iotests/162) are in shell scripts which use
a #!/bin/bash line so they are always run under bash.
Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Kamil Rytarowski <n54@gmx.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1500029117-6387-1-git-send-email-peter.maydell@linaro.org
Don't try to build the ivshmem-server and ivshmem-client tools unless
CONFIG_IVSHMEM is set.
This fixes in passing a build bug on NetBSD, which fails to build the
ivshmem tools because they use shm_open() and on NetBSD that requires
linking against -lrt.
Signed-off-by: Kamil Rytarowski <n54@gmx.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1500021225-4118-4-git-send-email-peter.maydell@linaro.org
[PMM: moved some code into earlier patches; minor bugfixes;
added commit message]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The current CONFIG_IVSHMEM is confusing, because it looks like it's a
flag for "do we have ivshmem support?", but actually it's a flag for
"is the ivshmem PCI device being compiled?" (and implicitly "do we
have ivshmem support?" is tested with CONFIG_EVENTFD).
Rename it to CONFIG_IVSHMEM_DEVICE to clear this confusion up;
shortly we will add a new CONFIG_IVSHMEM which really does indicate
whether the host can support ivshmem.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-id: 1500021225-4118-2-git-send-email-peter.maydell@linaro.org
Queued tcg and tcg code gen related cleanups
# gpg: Signature made Thu 20 Jul 2017 00:32:00 BST
# gpg: using RSA key 0xAD1270CC4DD0279B
# gpg: Good signature from "Richard Henderson <rth7680@gmail.com>"
# gpg: aka "Richard Henderson <rth@redhat.com>"
# gpg: aka "Richard Henderson <rth@twiddle.net>"
# Primary key fingerprint: 9CB1 8DDA F8E8 49AD 2AFC 16A4 AD12 70CC 4DD0 279B
* remotes/rth/tags/pull-tcg-20170719:
tcg: Pass generic CPUState to gen_intermediate_code()
tcg/tci: enable bswap16_i64
target/alpha: optimize gen_cvtlq() using deposit op
target/sparc: optimize gen_op_mulscc() using deposit op
target/sparc: optimize various functions using extract op
target/ppc: optimize various functions using extract op
target/m68k: optimize bcd_flags() using extract op
target/arm: optimize aarch32 rev16
target/arm: Optimize aarch64 rev16
coccinelle: add a script to optimize tcg op using tcg_gen_extract()
coccinelle: ignore ASTs pre-parsed cached C files
tcg: Expand glue macros before stringifying helper names
util/cacheinfo: Add missing include for ppc linux
tcg/mips: reserve a register for the guest_base.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
gcc 7 is pickier about our sources:
hw/usb/bus.c: In function ‘usb_port_location’:
hw/usb/bus.c:410:66: error: ‘%d’ directive output may be truncated writing between 1 and 11 bytes into a region of size between 0 and 15 [-Werror=format-truncation=]
snprintf(downstream->path, sizeof(downstream->path), "%s.%d",
^~
hw/usb/bus.c:410:9: note: ‘snprintf’ output between 3 and 28 bytes into a destination of size 16
snprintf(downstream->path, sizeof(downstream->path), "%s.%d",
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
upstream->path, portnr);
~~~~~~~~~~~~~~~~~~~~~~~
But we know that there are at most 5 levels of USB hubs, with at
most two digits per level; that plus the separating dots means we
use at most 15 bytes (including trailing NUL) of our 16-byte field.
Adding an assertion to show gcc that we checked for truncation is
enough to shut up the false-positive warning.
Inspired by an idea by Dr. David Alan Gilbert <dgilbert@redhat.com>.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170717151334.17954-1-eblake@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
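A standalone illustration of the technique (this is the pattern, not the exact hw/usb/bus.c code): capture snprintf's return value and assert that the result fits, which both documents the size reasoning and gives gcc's truncation analysis an explicit check to see.

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        char path[16];
        const char upstream[] = "1.2.3.4";   /* at most 5 levels, 2 digits each */
        int portnr = 12;

        int n = snprintf(path, sizeof(path), "%s.%d", upstream, portnr);
        /* Max string is nn.nn.nn.nn.nn: 15 bytes including the trailing NUL. */
        assert(n >= 0 && (size_t)n < sizeof(path));

        printf("%s\n", path);
        return 0;
    }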
Add a .editorconfig file for qemu. Specifies the indent and tab style
for various files (C code and Makefiles for starters). Most popular
editors support this either natively or via plugin.
Check http://editorconfig.org/ for details.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170717101547.22295-1-kraxel@redhat.com
Done with the Coccinelle semantic patch
scripts/coccinelle/tcg_gen_extract.cocci.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
It is much shorter to reverse all 4 half-words in parallel than
to extract, reverse, and deposit each in turn.
Suggested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
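For a 64-bit value, the parallel form is the familiar mask-and-shift byte swap within each halfword, e.g. (a plain C illustration of the idea rather than the TCG ops themselves):

    #include <assert.h>
    #include <stdint.h>

    /* Swap the two bytes of every 16-bit halfword in one pass, instead of
     * extracting, byte-swapping and depositing each halfword in turn. */
    static uint64_t rev16_parallel(uint64_t x)
    {
        const uint64_t mask = 0x00ff00ff00ff00ffull;
        return ((x & mask) << 8) | ((x >> 8) & mask);
    }

    int main(void)
    {
        assert(rev16_parallel(0x1122334455667788ull) == 0x2211443366558877ull);
        return 0;
    }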
This include was forgotten when splitting cacheinfo.c out of
tcg/ppc/tcg-target.inc.c (see commit b255b2c8).
For a CentOS 7 host, the include chain
<signal.h>
<bits/sigcontext.h>
<asm/sigcontext.h>
<asm/elf.h>
<asm/auxvec.h>
implicitly pulls in the desired AT_* defines.
Not so for Debian Jessie.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-Id: <20170711015524.22936-1-f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The previous commit changed the MIPS image from little endian to big endian.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
By itself this doesn't add much to our coverage. However later patches
will extend this image to include more bleeding edge libraries which
are not yet widely available in distros.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[AJB: extend commit msg]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
We'll also want to support some older Debian combinations for
architectures that didn't make the Debian 9 cut.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
[AJB: extend commit msg]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
When a test fails or hangs you don't want the hassle of getting the
debug tools installed. Let's install them in our image by default so we
can debug when we need to.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Although the upstream Travis images don't need this library, our
"travis-lite" scripts are written in Python. This allows us to do:
make docker-travis@travis J=10
and approximate a travis run on their default image.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Because global environment variables can be overridden when .travis.yml
is processed by the docker-travis target, the effect of this patch is
that docker-travis now obeys the "J=n" option.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
This is useful so that we can do builds at higher than -j3 when running
travis.py locally.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>