Coverity identifies that at the top of the while(1) loop
in curses_refresh() the variable nextchr is always ERR,
and so the else case of the first if() is dead code.
Remove this dead code, and narrow the scope of the
nextchr variable to the place where it's used.
(This confused logic has been present since the curses
code was added to QEMU in 2008.)
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1470925407-23850-3-git-send-email-peter.maydell@linaro.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Coverity spots that there is no bounds check before we
access the curses2qemu[] array. Add one, bringing this
code path into line with the one that looks up entries
in curses2keysym[].
In theory getch() shouldn't return out of range keycodes,
but it's better not to assume this.
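As a rough illustration, the shape of the guard is something like the
following sketch (the table size and helper name are stand-ins, not the
actual ui/curses.c code):

    #define CURSES_KEYS 1024    /* illustrative table size */

    static const int curses2qemu[CURSES_KEYS] = { 0 /* ... */ };

    static int lookup_qemu_key(int chr)
    {
        /* Only index the table if the keycode returned by getch() is
         * within range; otherwise report "no mapping". */
        if (chr < 0 || chr >= CURSES_KEYS) {
            return -1;
        }
        return curses2qemu[chr];
    }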
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1470925407-23850-2-git-send-email-peter.maydell@linaro.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Several error messages print out the raw enum value, which
is less than helpful to users, as these values are neither documented
nor stable across QEMU releases. Switch to using the enum string
instead.
The nettle implementation also had two typos where it mistakenly
said "algorithm" instead of "mode", and it reported the algorithm
value rather than the mode value.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
gnutls commit 846753877d renamed LIBGNUTLS_VERSION_NUMBER to GNUTLS_VERSION_NUMBER.
If using gnutls older than that version, we'll get the warning below:
crypto/tlscredsx509.c:618:5: warning: "GNUTLS_VERSION_NUMBER" is not defined
Because gnutls 3.x still defines LIBGNUTLS_VERSION_NUMBER for backwards
compatibility, let's use LIBGNUTLS_VERSION_NUMBER instead of
GNUTLS_VERSION_NUMBER to fix the build warning.
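For illustration, the guard ends up looking roughly like this (the exact
version threshold used in crypto/tlscredsx509.c is not reproduced here):

    #include <gnutls/gnutls.h>

    /* LIBGNUTLS_VERSION_NUMBER is defined by both old and new gnutls
     * releases, so it is safe to test unconditionally; the threshold
     * below is only illustrative. */
    #if LIBGNUTLS_VERSION_NUMBER >= 0x030000
    /* code paths that need gnutls >= 3.0.0 */
    #endif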
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The XTS cipher mode needs to be used with a cipher which has
a block size of 16 bytes. If a mis-matching block size is used,
the code will either corrupt memory beyond the IV array, or
not fully encrypt/decrypt the IV.
This fixes a memory corruption crash when attempting to use
cast5-128 with xts, since the former has an 8 byte block size.
A test case is added to ensure the cipher creation fails with
such an invalid combination.
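A rough sketch of the validation being added (the real check lives in
the crypto cipher backends and reports errors through QEMU's Error API
rather than stderr):

    #include <stdbool.h>
    #include <stdio.h>

    #define XTS_BLOCK_SIZE 16  /* XTS is only defined for 16-byte block ciphers */

    static bool xts_block_size_ok(size_t cipher_block_size)
    {
        if (cipher_block_size != XTS_BLOCK_SIZE) {
            fprintf(stderr, "Cipher block size %zu must be %d for XTS mode\n",
                    cipher_block_size, XTS_BLOCK_SIZE);
            return false;
        }
        return true;
    }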
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
virtio,vhost,pc: fixes and updates
balloon fixes wrt migration
virtio-vsock device support
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Fri 09 Sep 2016 22:36:13 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream:
vhost-vsock: add virtio sockets device
tests/acpi: speedup acpi tests
virtio-pci: minor refactoring
vhost: don't set vring call if no vector
virtio-pci: error out when both legacy and modern modes are disabled
virtio-balloon: fix stats vq migration
virtio: add virtqueue_rewind()
virtio-balloon: discard virtqueue element on reset
virtio: zero vq->inuse in virtio_reset()
virtio-pci: reduce modern_mem_bar size
target-i386: present virtual L3 cache info for vcpus
pc: Add 2.8 machine
virtio-pci: use size from correct structure
virtio: Tell the user what went wrong when event_notifier_init failed
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Implement the new virtio sockets device for host<->guest communication
using the Sockets API. Most of the work is done in a vhost kernel
driver so that virtio-vsock can hook into the AF_VSOCK address family.
The QEMU vhost-vsock device handles configuration and live migration
while the rx/tx happens in the vhost_vsock.ko Linux kernel driver.
The vsock device must be given a CID (host-wide unique address):
# qemu -device vhost-vsock-pci,id=vhost-vsock-pci0,guest-cid=3 ...
For more information see:
http://qemu-project.org/Features/VirtioVsock
[Endianness fixes and virtio-ccw support by Claudio Imbrenda
<imbrenda@linux.vnet.ibm.com>]
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
[mst: rebase to master]
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Use kvm acceleration if available.
Disable kernel-irqchip and use qemu64 cpu
for both kvm and tcg cases.
Using kvm acceleration saves about a second
and disabling kernel-irqchip has no visible
performance impact.
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
!legacy && !modern is shorter than !(legacy || modern).
I also prefer this (fewer parentheses) as a matter of taste.
Cc: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
We used to set the vring call fd unconditionally even if the guest driver
does not use MSI-X for this virtqueue at all. This causes lots of
unnecessary userspace accesses and other checks for drivers that do not
use interrupts at all (e.g. the virtio-net pmd). So check and clear the
vring call fd if the guest does not use any vector for this virtqueue.
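The idea, sketched with illustrative names rather than the actual
vhost/virtio-pci structures:

    #define VIRTIO_NO_VECTOR 0xffff  /* "no MSI-X vector assigned" */

    struct vq_state {
        unsigned int msix_vector;
        int call_fd;             /* -1: no call fd, host skips signalling */
    };

    static void setup_vring_call(struct vq_state *vq)
    {
        /* If the guest assigned no vector to this queue, don't install a
         * call fd at all, so the host never takes the signalling path. */
        if (vq->msix_vector == VIRTIO_NO_VECTOR) {
            vq->call_fd = -1;
        }
    }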
Perf diffs (on rx) shows lots of cpus wasted on vhost_signal() were saved:
#
28.12% -27.82% [vhost] [k] vhost_signal
14.44% -1.69% [kernel.vmlinux] [k] copy_user_generic_string
7.05% +1.53% [kernel.vmlinux] [k] __free_page_frag
6.51% +5.53% [vhost] [k] vhost_get_vq_desc
...
Pktgen tests show a 15.8% improvement in rx pps and 6.5% in tx pps.
Before: RX 2.08Mpps TX 1.35Mpps
After: RX 2.41Mpps TX 1.44Mpps
Signed-off-by: Jason Wang <jasowang@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Without presuming whether we got there because of a user mistake or some
more subtle bug in the tooling, it really does not make sense to
implement a non-functional device.
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The statistics virtqueue is not migrated properly because virtio-balloon
does not include s->stats_vq_elem in the migration stream.
After migration the statistics virtqueue hangs because the host never
completes the last element (s->stats_vq_elem is NULL on the destination
QEMU). Therefore the guest never submits new elements and the virtqueue
is hung.
Instead of changing the migration stream format in an incompatible way,
detect the migration case and rewind the virtqueue so the last element
can be completed.
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Roman Kagan <rkagan@virtuozzo.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Suggested-by: Roman Kagan <rkagan@virtuozzo.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
virtqueue_discard() requires a VirtQueueElement but virtio-balloon does
not migrate its in-use element. Introduce a new function that is
similar to virtqueue_discard() but doesn't require a VirtQueueElement.
This will allow virtio-balloon to access the element again after migration
with the usual proviso that the guest may have modified the vring since
last time.
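Conceptually, the new virtqueue_rewind() just undoes the accounting for
the popped elements so they can be popped and completed again; a
simplified sketch, not the real VirtQueue layout:

    struct vq_counters {
        unsigned int last_avail_idx;  /* next descriptor to pop */
        unsigned int inuse;           /* popped but not yet completed */
    };

    /* Pretend the last 'num' popped elements were never handed out. */
    static int vq_rewind(struct vq_counters *vq, unsigned int num)
    {
        if (num > vq->inuse) {
            return 0;
        }
        vq->last_avail_idx -= num;
        vq->inuse -= num;
        return 1;
    }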
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Roman Kagan <rkagan@virtuozzo.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The one pending element is being freed but not discarded on device
reset, which causes svq->inuse to creep up, eventually hitting the
"Virtqueue size exceeded" error.
Properly discarding the element on device reset makes sure that its
buffers are unmapped and the inuse counter stays balanced.
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Roman Kagan <rkagan@virtuozzo.com>
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
vq->inuse must be zeroed upon device reset like most other virtqueue
fields.
In theory, virtio_reset() just needs assert(vq->inuse == 0) since
devices must clean up in-flight requests during reset (requests cannot
be leaked!).
In practice, it is difficult to achieve vq->inuse == 0 across reset
because balloon, blk, 9p, etc implement various different strategies for
cleaning up requests. Most devices call g_free(elem) directly without
telling virtio.c that the VirtQueueElement is cleaned up. Therefore
vq->inuse is not decremented during reset.
This patch zeroes vq->inuse and trusts that devices are not leaking
VirtQueueElements across reset.
I will send a follow-up series that refactors request life-cycle across
all devices and converts vq->inuse = 0 into assert(vq->inuse == 0) but
this more invasive approach is not appropriate for stable trees.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Cc: qemu-stable <qemu-stable@nongnu.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Ladi Prosek <lprosek@redhat.com>
Currently each VQ Notification Virtio Capability is allocated
on a different page. The idea is to enable split drivers within
guests, however there are no known plans to do that.
The allocation results in an 8MB BAR, which is more than various
guest firmwares pre-allocate for the PCI bridge hotplug process.
Reserve 4 bytes per VQ by default and add a new parameter
"page-per-vq" to be used with split drivers.
Signed-off-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Some software algorithms are based on the hardware's cache info. For example,
in the x86 Linux kernel, when cpu1 wants to wake up a task on cpu2, cpu1 will
trigger a resched IPI and tell cpu2 to do the wakeup if they don't share a
low-level cache; conversely, cpu1 will access cpu2's runqueue directly if they
share the LLC. The relevant linux-kernel code is as below:
static void ttwu_queue(struct task_struct *p, int cpu)
{
    struct rq *rq = cpu_rq(cpu);
    ......
    if (... && !cpus_share_cache(smp_processor_id(), cpu)) {
        ......
        ttwu_queue_remote(p, cpu); /* will trigger RES IPI */
        return;
    }
    ......
    ttwu_do_activate(rq, p, 0); /* access target's rq directly */
    ......
}
In real hardware, the cpus on the same socket share the L3 cache, so one cpu
won't trigger resched IPIs when waking up a task on another. But QEMU doesn't
present virtual L3 cache info to the VM, so the Linux guest will trigger lots
of RES IPIs under some workloads even if the virtual cpus belong to the same
virtual socket. For KVM, this means lots of vmexits caused by the guest
sending IPIs.
The workload is SAP HANA's test suite; we ran it for one round (about 40
minutes) and observed the number of RES IPIs triggered in the (SUSE11 SP3)
guest during that period:
           No-L3        With-L3 (this patch applied)
cpu0:      363890       44582
cpu1:      373405       43109
cpu2:      340783       43797
cpu3:      333854       43409
cpu4:      327170       40038
cpu5:      325491       39922
cpu6:      319129       42391
cpu7:      306480       41035
cpu8:      161139       32188
cpu9:      164649       31024
cpu10:     149823       30398
cpu11:     149823       32455
cpu12:     164830       35143
cpu13:     172269       35805
cpu14:     179979       33898
cpu15:     194505       32754
avg:       268963.6     40129.8
The VM's topology is "1*socket 8*cores 2*threads".
After presenting virtual L3 cache info to the VM, the number of RES IPIs in
the guest drops by 85%.
For KVM, vcpus sending IPIs causes vmexits, which are expensive, so it can
cause severe performance degradation. We also tested overall system
performance with the vcpus actually running on separate physical sockets.
With the L3 cache info, performance improves by 7.2%~33.1% (avg: 15.7%).
Signed-off-by: Longpeng(Mike) <longpeng2@huawei.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
PIO MR registration should use size from the correct notify struct.
Doesn't affect any visible behaviour because the field values are the
same (both are 4).
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
event_notifier_init() can fail in real life, for example when there
are not enough open file handles available (EMFILE) when using a lot
of devices. So instead of leaving the average user with only a cryptic
error number, print out a proper error message with strerror(), so
that the user has a better way to figure out what is going on and to
see that using "ulimit -n" might help here, for example.
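The reporting style meant here, sketched with plain stderr to stay
self-contained (QEMU itself uses error_report()):

    #include <stdio.h>
    #include <string.h>

    static void report_event_notifier_error(int ret)  /* negative errno */
    {
        fprintf(stderr,
                "Failed to initialize event notifier: %s (raising the file "
                "descriptor limit with 'ulimit -n' may help)\n",
                strerror(-ret));
    }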
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Fri 09 Sep 2016 05:54:35 BST
# gpg: using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/docker-pull-request:
docker: silence debootstrap when --quiet is given
docker: build debootstrap after cloning
docker: make sure debootstrap is at least 1.0.67
docker: print warning if EXECUTABLE is not set when building debootstrap image
docker: debian-bootstrap.pre: print helpful message if DEB_ARCH/DEB_TYPE unset
docker: debian-bootstrap.pre: print error messages to stderr
docker: avoid dependency on 'realpath' package
docker.py: don't hang on large docker output
docker: Add a glib2-2.22 image
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some tests use the qtest protocol "memset" command with a zero
size, expecting it to do nothing. However in the current code this
will result in calling memset() with a NULL pointer, which is
undefined behaviour. Detect and specially handle zero sizes to
avoid this.
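A minimal sketch of the guard (the real change is in the qtest.c
handling of the "memset" command):

    #include <stddef.h>
    #include <string.h>

    static void fill_pattern(void *buf, int pattern, size_t size)
    {
        /* Calling memset() with a NULL pointer is undefined behaviour
         * even when size is 0, so treat zero-sized fills as a no-op. */
        if (size == 0) {
            return;
        }
        memset(buf, pattern, size);
    }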
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1470393800-7882-1-git-send-email-peter.maydell@linaro.org
In all cases, even when the dict doesn't contain 'ram', the QMP response
must be unreferenced.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The path is allocated and should be freed.
The QMP response should be unreferenced, but then 'machine' must be duplicated.
Use a destroy function for the PCTestData.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Allows one to specify a destroy function for the test data.
Add a fallback using glib's internal g_test_add_vtable() function, whose
signature changed over time. Tested with glib 2.22, 2.26 and 2.48, which
according to git log should be enough to cover all variations.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Simplify the code a bit by using g_strdup_printf() and storing the result
in a non-const variable, so casting is no longer needed and ownership is
clearer.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Further cleanup would need to call qemu_free_irq() at the appropriate
time, but for now this silences ASAN about direct leaks.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
The free_ranges array is used as a temporary pointer array; the segment
should still be freed, however, it shouldn't free the elements themselves.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Tested-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
The MachineClass member "name" is allocated by
machine_class_base_init(), but not freed by
machine_class_finalize(). Simply freeing it there doesn't work,
because DEFINE_PC_MACHINE() overwrites it with a literal string.
Fix DEFINE_PC_MACHINE() not to overwrite it, and add the missing
free to machine_class_finalize().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
qemu_irq is already a pointer, no need to have an extra pointer level.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
The isa_register_portio_list() function allocates ioports
data/state. Let's keep the reference to this data on some owner. This
isn't enough to fix leaks, but at least, ASAN stops complaining of
direct leaks. Further cleanup would require calling
portio_list_del/destroy().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Free the config blacklist list, not just its elements. Do so in the
more appropriate function, config_free().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If we silence docker when --quiet is given, we should also silence the
.pre script (i.e. debootstrap).
This only discards stdout, so some diagnostics (e.g. from git clone) are
still printed. Most of the verbose output is gone, however, and this way
we still have a chance to see error messages.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-9-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
When using the git version of debootstrap (because no usable version
of debootstrap was installed on the host), we need to run 'make' so
that devices.tar.gz gets built. Otherwise the first debootstrap stage
will fail without printing any error message.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-8-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
The debian-bootstrap image doesn't choose a default architecture and
distribution version, instead the user has to set both DEB_ARCH and
DEB_TYPE in the environment. Print a reasonably helpful message if
either of them isn't set instead of complaining about "qemu-" being
missing or erroring out because we cannot cd to the mirror URL.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-5-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
The 'realpath' executable is shipped in a separate package that isn't
installed by default on some distros.
We already use 'readlink -e' (provided by GNU coreutils) in some other
part of the code, so let's settle for that instead.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-Id: <1473192351-601-3-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Unlike Popen.communicate(), subprocess.call() doesn't read from the
stdout file descriptor. If the child process produces more output than
fits into the pipe buffer, it will block indefinitely.
If we don't intend to consume the output, just send it straight to
/dev/null to avoid this issue.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Reviewed-by: Janosch Frank <frankja@linux.vnet.ibm.com>
Message-Id: <1473192351-601-2-git-send-email-silbe@linux.vnet.ibm.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
ppc patch queue for 2016-Sep-7
This is my first pull request for the newly opened qemu-2.8 tree. It
contains a heap of things that were too late for 2.7 and have been
queued for a while. In particular:
* A number of preliminary patches for the powernv machine type
* A substantial cleanup of exception handling which will be
necessary to support running a TCG with hypervisor
facilities
* A start on support for POWER9
* Some TCG implementations for new POWER9 instructions
* Some TCG and related cleanups in preparation for POWER9
* Some assorted TCG optimizations
* An implementation of the H_CHANGE_LOGICAL_LAN_MAC hypercall
which allows the MAC address to be changed on the PAPR virtual
NIC.
* Add some extra test cases for several machines (this isn't
strictly in the ppc code, but is of most value to ppc)
NOTE: This pull request supersedes ppc-for-2.8-20160906, which had
some problems. Changes:
* Dropped BenH's lmw/stmw speedups, which break for
qemu-system-ppc64 on BE hosts
* A small fix to Thomas' serial output test to avoid a warning on
the isapc machine type.
* Some trivial checkpatch fixes
Note that some of the patches in this series still have large numbers
of checkpatch warnings. This is because they're moving existing code
that predates most of the checkpatch style conventions.
# gpg: Signature made Wed 07 Sep 2016 07:09:27 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.8-20160907: (64 commits)
tests: Check serial output of firmware boot of some machines
tests: Resort check-qtest entries in Makefile.include
spapr: implement H_CHANGE_LOGICAL_LAN_MAC h_call
ppc: Improve a few more helper flags
ppc: Improve the exception helpers flags
ppc: Improve flags for helpers loading/writing the time facilities
ppc: Don't generate dead code on unconditional branches
ppc: Stop dumping state on all exceptions in linux-user
ppc: Fix catching some segfaults in user mode
ppc: Fix macio ESCC legacy mapping
hw/ppc: add a ppc_create_page_sizes_prop() helper routine
hw/ppc: use error_report instead of fprintf
ppc: Rename #include'd .c files to .inc.c
target-ppc: add extswsli[.] instruction
target-ppc: add vsrv instruction
target-ppc: add vslv instruction
target-ppc: add vcmpnez[b,h,w][.] instructions
target-ppc: add vabsdu[b,h,w] instructions
target-ppc: add dtstsfi[q] instructions
target-ppc: implement branch-less divd[o][.]
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Some of the machines that we have got a firmware image for write
some output to the serial console while booting up. We can use
this output to make sure that the machine is basically working,
so this adds a test that checks the output of these machines
for some well-known "magic" strings.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The rather random list of check-qtest-xxx entries caused some
confusion in the past about where to use "=" and where to use "+="
(see commits 0ccac16f59 and 1f5c1cfbae
for example).
Sorting the check-qtest-xxx entries by architecture instead and
using some empty lines in between should help to ease this
situation a little bit, so that it is hopefully now obvious
that new tests should be added with "+=" instead of "=".
While we are at it, this patch also comments out two of the
"gcov-files-..." lines since the corresponding m48t59-test is
disabled for sparc and sparc64, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since kernel v4.0, Linux uses H_CHANGE_LOGICAL_LAN_MAC to change the
MAC address of an ibmveth interface on the fly.
As QEMU doesn't implement this h_call, we can no longer change the
MAC address of an spapr-vlan interface.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Mostly turn "store" type of helpers into TCG_CALL_NO_WG because
they can take exceptions. Also fixup_thrm doesn't read nor write
the tracked environment.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Those helpers never load from or store to the TCG tracked environment,
nor do they generate synchronous exceptions (they might generate an
asynchronous interrupt but that's not an issue here).
So we can make them all use TCG_CALL_NO_RWG
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We are always generating the "else" case of the condition even when
generating an unconditional branch that will never hit it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Other archs don't do it; some programs catch signals just fine,
and those dumps just clutter the output. Keep the dumps for cases
that aren't supposed to happen such as unknown codes.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The usermode "translate" code generates an error code value that
has the "is_write" bit set, which causes our switch/case to miss
and display "Invalid segfault errno" and a spurrious second state
dump. Fix it.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current mapping, while correct for the base ports (which is all the
driver uses these days), is wrong for the extended registers.
I suspect the bugs come from incorrect tables in the CHRP IO Ref document;
I have verified that the new values here match Apple's MacTech.pdf.
Note: Nothing that I know of actually uses these registers so it's not a
huge deal, but this patch has the added advantage of adding comments to
document what the registers are.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Adds the following instructions:
vcmpnezb[.]: Vector Compare Not Equal or Zero Byte
vcmpnezh[.]: Vector Compare Not Equal or Zero Halfword
vcmpnezw[.]: Vector Compare Not Equal or Zero Word
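The per-element rule, sketched for the byte form (vector and
condition-register plumbing omitted; this is not the TCG implementation):

    #include <stdint.h>
    #include <stdbool.h>

    /* Result element is all-ones when the inputs differ or when either
     * input element is zero, otherwise all-zeroes. */
    static uint8_t vcmpnezb_elem(uint8_t a, uint8_t b)
    {
        bool set = (a != b) || (a == 0) || (b == 0);
        return set ? 0xff : 0x00;
    }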
Signed-off-by: Swapnil Bokade <bokadeswapnil@gmail.com>
[ collapse switch case ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
While implementing the modulo instructions, I figured out that the divd
implementation uses many branches. Change the logic to achieve branch-less
code. In case of invalid input, the (architecturally undefined) result is
set to the dividend.
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Some operations aren't allowed in LE mode; use a helper rather than
open-coding the exception generation.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Use tlb_vaddr_to_host to do a fast path single translate for
the whole cache line. Also make the reservation check match
the entire range.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We don't need to call a helper for trap always and trap never
which are used by Linux under some circumstances.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
--
v2. Don't generate the helper call when trapping always
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The current alignment exception generation tries to load the opcode
to put in DSISR from a context where a cpu_ldl_code() is really not
a good idea. It might fault and longjmp out and that's not something
we want happening here.
Instead, pass the relevant opcode bits via the error_code.
There are a couple of cases of alignment interrupts that won't set
anything: the ones coming from accesses to direct store segments. But
that doesn't happen in practice; nobody used direct store segments,
and they are gone from newer chips.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the NIP update to after the conditional branch so that we
don't do it if we aren't going to take the alignment exception
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is no longer necessary as the helpers will properly retrieve
the return address when needed.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is no longer necessary as the helpers will properly retrieve
the return address when needed.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is no longer necessary as the helpers will properly retrieve
the return address when needed. Also remove gen_update_current_nip()
which didn't seem to make much sense to me.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is no longer necessary as the helpers will properly retrieve
the return address when needed
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We need to pass it to the raise helper since we don't update it
before the calls.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead, pass GETPC() result to the corresponding helpers. This
requires a bit of fiddling to get the PC (hopefully) right in
the case where we generate a program check, though the hacks there
are temporary, a subsequent patch will clean this all up by always
having the nip already set to the right instruction when taking
the fault.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Fix trivial checkpatch warning]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We don't implement imprecise FP exceptions and using store_current
which sets SRR1 to the *previous* instruction never makes sense
for these. So let's be truthful and make them precise, which is
allowed by the architecture.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This is no longer necessary as the helpers will properly retrieve
the return address.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of relying on NIP having been updated already.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[dwg: Fold in fix to mark function always_inline]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of using the same helpers called from translate.c, let's have
a bunch of functions that take the various argument combinations,
especially the retaddr which will be needed in subsequent patches,
and leave the helpers to be just that, helpers for translate.c
We don't yet convert all users, we'll go through them in subsequent
patches.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
--
v2. Fix raise_exception_ra() to properly pass raddr
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ISA 3.0 has introduced EO - Expanded Opcode. Introduce a third-level
indirect opcode table and corresponding parsing routines.
EO (11:12) Expanded opcode field
           Formats: XX1
EO (11:15) Expanded opcode field
           Formats: VX, X, XX2
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
[dwg: Trivial checkpatch fixup]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
maddhd:  Multiply-Add High Doubleword
maddhdu: Multiply-Add High Doubleword Unsigned
The above two instructions are a dual form and differ by one bit
(bit 31).
Multiplies two 64-bit registers (RA * RB), adds the third register (RC)
to the result (a quadword) and returns the higher dword in the target
register (RT).
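The arithmetic, sketched with GCC/Clang's __int128 (register decode and
the unsigned maddhdu variant omitted; this is not the TCG implementation):

    #include <stdint.h>

    static int64_t maddhd(int64_t ra, int64_t rb, int64_t rc)
    {
        /* 128-bit product plus addend; the target gets the high dword. */
        __int128 sum = (__int128)ra * rb + rc;
        return (int64_t)(sum >> 64);
    }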
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
maddld: Multiply-Add Low Doubleword
Multiplies two 64-bit registers (RA * RB), adds the third register (RC)
to the result (a quadword) and returns the lower dword in the target
register (RT).
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The CR field number is provided in the opcode as BFA (11:13).
Returns:
 -1 if bit 0 of the CR field is set
  1 if bit 1 of the CR field is set
  0 otherwise
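A sketch of the selection described above, treating the CR field as a
4-bit value whose bit 0 is its most significant bit, as Power numbers CR
bits (not the TCG implementation):

    #include <stdint.h>

    static int64_t crf_to_result(uint32_t crf)  /* crf: 4-bit CR field */
    {
        if (crf & 0x8) {         /* bit 0 of the field */
            return -1;
        } else if (crf & 0x4) {  /* bit 1 of the field */
            return 1;
        }
        return 0;
    }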
Signed-off-by: Vivek Andrew Sha <vivekandrewsha@gmail.com>
[ reworded commit, used 32bit ops as crf is 32bits ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Adding the following instructions for ISA 3.0 support:
modud: Modulo Unsigned Dword
modsd: Modulo Signed Dword
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The patch adds CPU PVR definition for POWER9 and enables QEMU to launch
guests/linux-user in TCG mode.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ Added POWER9 alias, POWER9 SPAPR core and dropped MMU defines ]
Signed-off-by: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
[dwg: Dropped sPAPR core type again for now]
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
spapr_pci would also be a good candidate but the macro _FDT is
slightly different. It returns and does not exit.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We abort a few lines above if kernel_xics_fd == -1.
This is only code cleanup.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
target-arm queue:
* fix incorrect LPAE bit in FSR for alignment faults
* ACPI: fix the AML ID format for CPU devices to work for
large numbers of CPUs
* ast2400: add memory controller device model
* m25p80: fix the vmstate structure name (migration break)
# gpg: Signature made Tue 06 Sep 2016 20:02:28 BST
# gpg: using RSA key 0x3C2525ED14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"
# gpg: aka "Peter Maydell <pmaydell@gmail.com>"
# gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>"
# Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* remotes/pmaydell/tags/pull-target-arm-20160906-1:
block: m25p80: Fix vmstate structure name
ARM: ACPI: fix the AML ID format for CPU devices
target-arm: Fix lpae bit in FSR on an alignment fault
ast2400: add a memory controller device model
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Current QEMU will stall guest VM booting under ACPI mode when vcpu count
is >= 12. Analyzing the booting log, it turns out that DSDT table can't
be loaded correctly due to "Invalid character(s) in name (0x62303043),
repaired: [C00*]". This is because existing QEMU uses a lower case AML
ID for CPU devices (e.g. C000, C001, ..., C00a, C00b). The ACPI code
inside guest VM detects this lower case character as an invalid character
(see acpi_ut_valid_acpi_char() in drivers/acpi/acpica/utstring.c file)
and converts it to "*". This causes duplicated IDs (i.e. "C00a" ==>"C00*"
and "C00b" ==> "C00*"). So ACPI refuses to load the table.
This patch fixes the problem by changing the format to use an upper case
character. This matches the CPU ID formats used in other parts of the
QEMU code.
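A sketch of the resulting naming scheme (an upper-case hex suffix keeps
the name within ACPI's allowed character set); the helper and format
string here are illustrative:

    #include <stdio.h>

    static void cpu_aml_name(char *buf, size_t len, int cpu_index)
    {
        /* e.g. cpu_index 11 -> "C00B" rather than "C00b" */
        snprintf(buf, len, "C%.03X", cpu_index);
    }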
Reported-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Wei Huang <wei@redhat.com>
Reviewed-by: Shannon Zhao <shannon.zhao@linaro.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Message-id: 1472852809-23042-1-git-send-email-wei@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The u-boot in the previous release of the SDK was using a hardcoded
value for the memory size. This is not true anymore; the value is now
retrieved from the memory controller.
Below is a model for this device, only supporting unlock and
configuration. Without it, we end up running a guest with 64MB, which
is a bit low nowadays. It uses a 'silicon-rev' property and ram_size to
build a default value. Some bits should be linked to SCU strapping
registers, but it seems a bit complex to add for the current need.
The model is ready for the AST2500 SoC.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Block layer patches
# gpg: Signature made Tue 06 Sep 2016 11:38:01 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (36 commits)
block: Allow node name for 'qemu-io' HMP command
qemu-iotests: Log QMP traffic in debug mode
block jobs: Improve error message for missing job ID
coroutine: Assert that no locks are held on termination
coroutine: Let CoMutex remember who holds it
qcow2: fix iovec size at qcow2_co_pwritev_compressed
test-coroutine: Fix coroutine pool corruption
qemu-iotests: add vmdk for test backup compression in 055
qemu-iotests: test backup compression in 055
blockdev-backup: added support for data compression
drive-backup: added support for data compression
block: simplify blockdev-backup
block: simplify drive-backup
block/io: turn on dirty_bitmaps for the compressed writes
block: remove BlockDriver.bdrv_write_compressed
qcow: cleanup qcow_co_pwritev_compressed to avoid the recursion
qcow: add qcow_co_pwritev_compressed
vmdk: add vmdk_co_pwritev_compressed
qcow2: cleanup qcow2_co_pwritev_compressed to avoid the recursion
qcow2: add qcow2_co_pwritev_compressed
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
First (big) chunk of s390x updates:
- cpumodel support for s390x
- various fixes and improvements
# gpg: Signature made Tue 06 Sep 2016 16:09:53 BST
# gpg: using RSA key 0xDECF6B93C6F02FAF
# gpg: Good signature from "Cornelia Huck <huckc@linux.vnet.ibm.com>"
# gpg: aka "Cornelia Huck <cornelia.huck@de.ibm.com>"
# Primary key fingerprint: C3D0 D66D C362 4FF6 A8C0 18CE DECF 6B93 C6F0 2FAF
* remotes/cohuck/tags/s390x-20160906-v2: (38 commits)
s390x/cpumodel: implement QMP interface "query-cpu-model-baseline"
s390x/cpumodel: implement QMP interface "query-cpu-model-comparison"
s390x/cpumodel: implement QMP interface "query-cpu-model-expansion"
qmp: add QMP interface "query-cpu-model-baseline"
qmp: add QMP interface "query-cpu-model-comparison"
qmp: add QMP interface "query-cpu-model-expansion"
s390x/kvm: don't enable key wrapping if msa3 is disabled
s390x/kvm: let the CPU model control CMM(A)
s390x/kvm: disable host model for problematic compat machines
s390x/kvm: implement CPU model support
s390x/kvm: allow runtime-instrumentation for "none" machine
s390x/sclp: propagate hmfai
s390x/sclp: propagate the mha via sclp
s390x/sclp: propagate the ibc val (lowest and unblocked ibc)
s390x/sclp: indicate sclp features
s390x/sclp: introduce sclp feature blocks
s390x/sclp: factor out preparation of cpu entries
s390x/cpumodel: check and apply the CPU model
s390x/cpumodel: let the CPU model handle feature checks
s390x/cpumodel: expose features and feature groups as properties
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Let's implement that interface by reusing our conversion code and
lookup code for CPU definitions.
In order to find a compatible CPU model, we first detect the maximum
possible CPU generation and then try to find a maximum model, satisfying
all base features (not exceeding the maximum generation).
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-31-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's implement that interface by reusing our conversion code implemented
for expansion.
We use CPU generations and CPU features to calculate the result. This
means that a zEC12 cannot simply be converted into a z13 by stripping
off features. This is required, as other magic values (e.g. maximum
address sizes) belong to a CPU generation and cannot simply be
emulated by an older generation.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-30-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
In order to expand CPU models, we create temporary cpus that handle the
feature/group parsing. Only CPU feature properties are expanded.
When converting the data structure back, we always fall back to the
static base CPU model, which is by definition migration-safe.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-29-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's provide a standardized interface to baseline two CPU models, to
create a third, compatible one. This is especially helpful when two
CPU models are not identical, but a CPU model is required that is
guaranteed to run in both configurations where the original models run.
"query-cpu-model-baseline" takes two CPU models and returns a third,
compatible model. The result will always be a static CPU model.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-28-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's provide a standardized interface to compare two CPU models.
"query-cpu-model-compare" takes two models and returns how they compare
in a specific configuration.
The result will give guarantees about runnability. E.g. if a CPU model A
is a subset of CPU model B, model A is guaranteed to run in configurations
where model B runs, but not the other way around (might or might not run).
Usually, CPU features or CPU generations are used to calculate the result.
If a model is not guaranteed to run in a certain environment (e.g.
incompatible), a compatible one can be created by "baselining" both models
(follow up patch).
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-27-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's provide a standardized interface to expand CPU models. This interface
can be used by tooling to get details about a specific CPU model in a
certain configuration, e.g. about the "host" model.
To take care of all architectures, two detail levels for an expansion
are introduced. Certain architectures might not support all detail levels.
While "full" will expand and indicate all relevant properties/features
of a CPU model, "static" expands to a static base CPU model, that will
never change between QEMU versions and therefore have the same features
when used under different compatibility machines.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-26-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
As the CPU model now controls msa3, trying to set wrapping keys without
msa3 being around/enabled in the kernel will produce misleading errors.
So let's simply not configure key wrapping if msa3 is not enabled, and
make compat machines with a disabled CPU model work correctly.
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-25-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Starting with recent kernels, if the cmma attributes are available, we
actually have hardware support. Enabling CMMA then means providing the
guest VCPU with CMM, therefore enabling its CMM facility.
Let's not blindly enable CMM anymore but let's control it using CPU models.
For disabled CPU models, CMMA will continue to always get enabled.
Also enable it in the applicable default models.
Please note that CMM doesn't work with hugetlbfs, therefore we will
warn the user and keep it disabled. Migrating from/to a hugetlbfs
configuration works, as it will be disabled on both sides.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-24-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Compatibility machines that touch runtime-instrumentation should not
be used with the CPU model. Otherwise the host model will look different,
depending on the QEMU machine QEMU has been started with.
So let's simply disable the host model for existing compatibility machines
that all disable ri. This, in return, disables the CPU model for these
compat machines completely.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-23-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
We have to test if a configured CPU model is runnable in the current
configuration, and if not report why that is the case. This is done by
comparing it to the maximum supported model (host for KVM or z900 for TCG).
Also, we want to do some base sanity checking for a configured CPU model.
We'll cache the maximum model and the applied model (for performance
reasons and because KVM can only be configured before any VCPU is created).
For unavailable "host" model, we have to make sure that we inform KVM,
so it can do some compatibility stuff (enable CMMA later on to be precise).
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-13-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
If we have certain features enabled, we have to migrate additional state
(e.g. vector registers or runtime-instrumentation registers). Let the
CPU model control that unless we have no "host" CPU model in the KVM
case. This will later on be the case for compatibility machines, so
migration from QEMU versions without the CPU model will still work.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-12-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's add all features and feature groups as properties to all CPU models.
If the "host" CPU model is unknown, we can neither query nor change
features. KVM will just continue to work like it did until now.
We will not allow enabling features that were not part of the original
CPU model, because that could collide with the IBC in KVM.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-11-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
A CPU model consists of a CPU definition, to which delta changes are
applied - features added or removed (e.g. z13-base,vx=on). In addition,
certain properties (e.g. cpu id) can later on change during migration
but belong into the CPU model. This data will later be filled from the
host model in the KVM case.
Therefore, store the configured CPU model inside the CPU instance, so
we can later on perform delta changes using properties.
For the "qemu" model, we emulate in TCG a z900. "host" will be
uninitialized (cpu->model == NULL) unless we have CPU model support in KVM
later on. The other models are all initialized from their definitions.
Only the "host" model can have a cpu->model == NULL.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-10-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch adds the CPU model definitions that are known on s390x -
like z900, zBC12 or z13. For each definition, introduce two CPU models:
1. Base model (e.g. z13-base): Minimum feature set we expect to be around
on all z13 systems. These models are migration-safe and will never
change.
2. Flexible models (e.g. z13): Models that can change between QEMU versions
and will be extended over time as we implement further features that
are already part of such a model in real hardware of certain
configurations.
We want to work on features using ordinary bitmap operations, however we
can't initialize a bitmap statically (unsigned long[] ...). Therefore we
store the generated feature lists in separate arrays and convert them to
proper bitmaps before registering all our CPU model classes.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-9-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Let's use the generated groups to create feature group representations for
the user. These groups can later be used to enable/disable multiple
features in one shot and will be used to reduce the amount of reported
features to the user if all subfeatures are in place.
We want to work on features using ordinary bitmap operations, however we
can't initialize a bitmap statically (unsigned long[] ...). Therefore
we store the generated feature lists in separate arrays and convert
them to proper bitmaps before they will ever be used.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-8-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Feature groups will be very helpful to reduce the amount of features
typically available in sane configurations. E.g. the MSA facilities
introduced loads of subfunctions, which could - in theory - go away
in the future, but we want to avoid reporting hundreds of features to
the user if usually all of them are in place.
Groups only contain features that were introduced in one shot, not just
random features. Therefore, groups can never change. This is an important
property regarding migration.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-7-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch introduces the helper "gen-features" which allows to generate
feature list definitions at compile time. Its flexibility is better and the
error-proneness is lower when compared to static programming time added
statements.
The helper includes "target-s390x/cpu_features.h" to be able to use named
facility bits instead of numbers. The generated defines will be used for
the definition of CPU models.
We generate feature lists for each HW generation and GA for EC models. BC
models are always based on an EC version and have no separate definitions.
Base features: Features we expect to be always available in sane setups.
Migration safe - will never change. Can be seen as "minimum features
required for a CPU model".
Default features: Features we expect to be stable and around in latest
setups (e.g. having KVM support) - not migration safe.
Max features: All supported features that are theoretically allowed for a
CPU model. Exceeding these features could otherwise produce problems with
IBC (instruction blocking controls) in KVM.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
[generate base, default and models. renaming and cleanup]
Message-Id: <20160905085244.99980-6-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The patch introduces s390x CPU features (most of them referred to as
facilities) along with their description and some functions that will be
helpful when working with the features later on.
Please note that we don't introduce all known CPU features, only the
ones currently supported by KVM + QEMU. We don't want to blindly enable
any facilities later on for which we don't yet know whether QEMU needs
additional support to handle them properly (e.g. migrating additional
state when active). We can update QEMU later on.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael Mueller <mimu@linux.vnet.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
[reworked to include non-stfle features, added definitions]
Message-Id: <20160905085244.99980-5-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
This patch introduces two CPU models, "host" and "qemu".
"qemu" is used as default when running under TCG. "host" is used
as default when running under KVM. "host" cannot be used without KVM.
"host" is not migration-safe. They both inherit from the base s390x CPU,
which is turned into an abstract class.
This patch also changes CPU creation to take care of the passed CPU string
and reuses the common parse_features() function for that purpose. Unknown
CPU definitions are now reported. The "-cpu ?" and "query-cpu-definitions"
commands are changed to list all CPU subclasses automatically, including
migration-safety and whether they are static.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-3-dahi@linux.vnet.ibm.com>
[CH: fix up self-assignments in s390_cpu_list, as spotted by clang]
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
x86 and memory backends queue, 2016-09-05
This includes a few features that were submitted just after hard
freeze, and a bug fix for memory backend initialization ordering.
# gpg: Signature made Mon 05 Sep 2016 20:50:14 BST
# gpg: using RSA key 0x2807936F984DC5A6
# gpg: Good signature from "Eduardo Habkost <ehabkost@redhat.com>"
# Primary key fingerprint: 5A32 2FD5 ABC4 D3DB ACCF D1AA 2807 936F 984D C5A6
* remotes/ehabkost/tags/x86-pull-request:
vl: Delay initialization of memory backends
vhost-user-test: Use libqos instead of pxe-virtio.rom
target-i386: Add more Intel AVX-512 instructions support
exec: Ensure the only one cpu_index allocation method is used
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Initialization of memory backends may take a while when
prealloc=yes is used, depending on their size. Initializing
memory backends before chardevs may delay the creation of monitor
sockets, and trigger timeouts on management software that waits
until the monitor socket is created by QEMU. See, for example,
the bug report at:
https://bugzilla.redhat.com/show_bug.cgi?id=1371211
In addition to that, allocating memory before calling
configure_accelerator() breaks the tcg_enabled() checks at
memory_region_init_*().
This patch fixes those problems by adding "memory-backend-*"
classes to the delayed-initialization list.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
vhost-user-test relies on iPXE just to initialize the virtio-net
device, and doesn't do any actual packet tx/rx testing.
In addition to that, the test relies on TCG, which is
incompatible with vhost. The test only worked by accident: a bug in
the memory backend initialization made memory regions not have
the DIRTY_MEMORY_CODE bit set in dirty_log_mask.
This changes vhost-user-test to initialize the virtio-net device
using libqos, and not use TCG nor pxe-virtio.rom.
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Make sure that cpu_index auto allocation isn't used in
combination with manual cpu_index assignment, and
disallow out-of-order cpu removal if auto allocation
is in use.
A target that wishes to support out-of-order unplug should
switch to manual cpu_index assignment. The following patch
could be used as an example:
(pc: init CPUState->cpu_index with index in possible_cpus[])
Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Removes the event state array used for early initialization. Since only
events with the "vcpu" property need a late initialization fixup, treat
their initialization specially.
Assumes that the user won't touch the state of "vcpu" events between
early and late initialization (e.g., through QMP).
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu>
Message-id: 147194273191.26836.14423079546263831356.stgit@fimbulvetr.bsc.es
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When using a node name, create a temporary BlockBackend that is used to
run the qemu-io command.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Python tests are already annoying enough to debug. With QMP traffic
available it's a little bit easier at least.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
If a block job is started with a node name rather than a device name and
no explicit job ID is passed, the error reported was that '' isn't a
well-formed ID. That is correct, but we can make the message a little
bit nicer.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
A coroutine that takes a lock must also release it again. If the
coroutine terminates without having released all its locks, it's buggy
and we'll probably run into a deadlock sooner or later. Make sure that
we don't get such cases.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
In cases of deadlocks, knowing who holds a given CoMutex is really
helpful for debugging. Keeping the information around doesn't cost much
and allows us to add another assertion to keep the code correct, so
let's just add it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Using bytes as the size is more exact than s->cluster_size, although
qemu_iovec_to_buf() will not allow going beyond the qiov anyway.
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The test case overwrites the Coroutine object with 0xff as a way to
assert that the coroutine isn't used any more. However, this means that
the coroutine pool now contains a corrupted object and later test cases
may get this corrupted object and crash.
This patch saves the real content of the object and restores it after
completing the test. The only use of the coroutine pool between those
two points is the deletion of co2. As this only means an insertion at
the head of an SLIST (release_pool or alloc_pool), it doesn't access the
invalid list pointers that co1 has during this period.
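The pattern, roughly (a sketch with generic types, not the actual test
code from tests/test-coroutine.c):

#include <string.h>

/* poison an object for the duration of the checks, then restore the real
 * bytes so the pool does not hand out a corrupted object later */
static void poison_check_restore(void *obj, void *backup, size_t size)
{
    memcpy(backup, obj, size);    /* save the real contents            */
    memset(obj, 0xff, size);      /* poison while asserting non-use    */
    /* ... run the assertions on obj here ... */
    memcpy(obj, backup, size);    /* restore before the pool reuses it */
}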
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The idea is simple: backup data is "written-once" data. It is written
block by block and is typically large, so it would be nice to save
storage space by compressing it.
The patch adds a flag to the qmp/hmp drive-backup command which enables
block compression. Compression should be implemented in the format driver
to enable this feature.
Format drivers impose some limitations on compressed writes, most notably
that the data can be written only once; for backup this is perfectly fine.
The driver enforces these limitations and reports an error if we are doing
something wrong.
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Jeff Cody <jcody@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Eric Blake <eblake@redhat.com>
CC: John Snow <jsnow@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Most of the coroutine-creating code in bdrv_pwrite_compressed() duplicates
bdrv_prwv_co(), so we can just add a flag (BDRV_REQ_WRITE_COMPRESSED) and
use bdrv_prwv_co() as the generic entry point. In the end we get a
coroutine-oriented compressed-write path by using
bdrv_co_pwritev()/blk_co_pwritev() with the BDRV_REQ_WRITE_COMPRESSED flag.
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Jeff Cody <jcody@redhat.com>
CC: Markus Armbruster <armbru@redhat.com>
CC: Eric Blake <eblake@redhat.com>
CC: John Snow <jsnow@redhat.com>
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There is no reason why an NBD server couldn't be started for any node,
even if it's not on the top level. This converts nbd-server-add to
accept a node-name.
Note that there is a semantic difference between using a BlockBackend
name and the node name of its root: In the former case, the NBD server
is closed on eject; in the latter case, the NBD server doesn't drop its
reference and keeps the image file open this way.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
The builtin NBD server uses its own BlockBackend now instead of reusing
the monitor/guest device one.
This means that it has its own writethrough setting now. The builtin
NBD server always uses writeback caching now regardless of whether the
guest device has WCE enabled. qemu-nbd respects the cache mode given on
the command line.
We still need to keep a reference to the monitor BB because we put an
eject notifier on it, but we don't use it for any I/O.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
drive-mirror to accept a node-name without lifting the restriction that
we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
drive-backup and the corresponding transaction action to accept a
node-name without lifting the restriction that we're operating at a root
node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
change-backing-file to accept a node-name without lifting the
restriction that we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
blockdev-snapshot-internal-sync to accept a node-name without lifting
the restriction that we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
blockdev-snapshot-delete-internal-sync to accept a node-name without
lifting the restriction that we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
blockdev-mirror to accept a node-name without lifting the restriction
that we're operating at a root node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
blockdev-backup and the corresponding transaction action to accept a
node-name without lifting the restriction that we're operating at a root
node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
block-commit to accept a node-name without lifting the restriction that
we're operating at a root node.
As libvirt makes use of the DeviceNotFound error class, we must add
explicit code to retain this behaviour because qmp_get_root_bs() only
returns GenericErrors.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
In order to remove the necessity to use BlockBackend names in the
external API, we want to allow node-names everywhere. This converts
block-stream to accept a node-name without lifting the restriction that
we're operating at a root node.
In case of an invalid device name, the command returns the GenericError
error class now instead of DeviceNotFound, because this is what
qmp_get_root_bs() returns.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
This allows the creation of an empty scsi-cd device without manually
creating a BlockBackend.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
This allows the creation of an empty ide-cd device without manually
creating a BlockBackend.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Acked-by: Eric Blake <eblake@redhat.com>
It might be of interest for tooling whether a CPU definition can be safely
used when migrating, or if e.g. CPU features might get lost during
migration when migrating from/to a different QEMU version or host, even if
the same compatibility machine is used.
Also, we want to know if a CPU definition is static and will never change.
Because these definitions can then be used independently of a compatibility
machine and will always have the same feature set, they can e.g. be used
to indicate the "host" model in libvirt later on.
Let's add two return values to query-cpu-definitions, stating for each
returned CPU definition, if it is migration-safe and if it is static.
While "migration-safe" is optional, "static" will be set to "false"
automatically by all implementing architectures. If a model really has
been static all the time and will remain so in the future, this can simply
be changed later.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Message-Id: <20160905085244.99980-2-dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Diag 501 (4 bytes) was used until now for software breakpoints on s390.
As instructions on s390 might be 2 bytes long, temporarily overwriting them
with 4 bytes is evil and can result in very strange guest behaviour.
We make use of invalid instruction 0x0000 as new sw breakpoint instruction.
We have to enable interception of that instruction in KVM using a
capability.
If no software breakpoint has been inserted at the reported position, an
operation exception has to be injected into the guest. Otherwise a
breakpoint has been hit and the pc has to be rewound.
If KVM doesn't yet support interception of instruction 0x0000 the
existing mechanism exploiting diag 501 is used. To keep overhead low,
interception of instruction 0x0000 will only be enabled if sw breakpoints
are really used.
Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The cssid 255 is reserved but still valid from an architectural
point of view. However, feeding a bogus schid of 0xffffffff into
the virtio hypercall will lead to a crash:
Stack trace of thread 138363:
#0 0x00000000100d168c css_find_subch (qemu-system-s390x)
#1 0x00000000100d3290 virtio_ccw_hcall_notify
#2 0x00000000100cbf60 s390_virtio_hypercall
#3 0x000000001010ff7a handle_hypercall
#4 0x0000000010079ed4 kvm_cpu_exec (qemu-system-s390x)
#5 0x00000000100609b4 qemu_kvm_cpu_thread_fn
#6 0x000003ff8b887bb4 start_thread (libpthread.so.0)
#7 0x000003ff8b78df0a thread_start (libc.so.6)
This is because the css array was only allocated for 0..254
instead of 0..255.
Let's fix this by bumping MAX_CSSID to 255 and fencing off the
reserved cssid of 255 during css image allocation.
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
As we provide format 1 chsc scpd data (and don't support any ficon
channels), we de facto already have the ficon-cascaded-switch
facility.
Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
With the current code a simple sclp command takes about 13000 ns.
The biggest part seems to be the resolver of the object model. By
caching the sclp device the time for an sclp command goes down to
2500 ns. In real-life scenarios, this change doubles
the speed of the sclp console when sending single-byte outputs
to /dev/console.
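A minimal sketch of the caching idea, with a hypothetical
resolve_sclp_device() standing in for the expensive QOM path lookup:

typedef struct SCLPDevice SCLPDevice;

extern SCLPDevice *resolve_sclp_device(void);   /* hypothetical, expensive */

static SCLPDevice *get_sclp_device(void)
{
    static SCLPDevice *sclp;

    if (!sclp) {
        sclp = resolve_sclp_device();   /* resolved once, then cached */
    }
    return sclp;
}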
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
If one pci device is plugged successfully, there must be an existing zpci
device. This means that during hot-unplugging a pci device, its
corresponding zpci device must be found. Therefore we replace the current
code with an assert.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
In the case that zpci is automatically created, we did not return
immediately on failure, which would lead to NULL pointer dereferencing.
Let's fix it.
Signed-off-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
The qdisk implementation uses the native xenbus protocol only when no
protocol is specified at all. As the explicit 32- or 64-bit protocol is
slower than the native one (requests are copied element by element
instead of via memcpy), this is not optimal.
Correct this by using the native protocol whenever the word sizes of
frontend and backend match.
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
The 9P spec at http://man.cat-v.org/plan_9/5/intro says:
All directories must support walks to the directory .. (dot-dot) meaning
parent directory, although by convention directories contain no explicit
entry for .. or . (dot). The parent of the root directory of a server's
tree is itself.
This means that a client cannot walk further than the root directory
exported by the server. In other words, if the client wants to walk
"/.." or "/foo/../..", the server should answer like the request was
to walk "/".
This patch just does that:
- we cache the QID of the root directory at attach time
- during the walk we compare the QID of each path component with the root
QID to detect if we're in a "/.." situation
- if so, we skip the current component and go to the next one
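A rough sketch of that comparison, with a simplified QID type (the real
code compares the cached root QID field by field):

#include <stdbool.h>
#include <stdint.h>
#include <string.h>

struct qid { uint64_t path; uint32_t version; uint8_t type; };

/* at the root, ".." resolves to the root itself, so skip the component */
static bool walk_skips_component(const struct qid *cur,
                                 const struct qid *root, const char *name)
{
    return !strcmp(name, "..") &&
           cur->path == root->path &&
           cur->version == root->version &&
           cur->type == root->type;
}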
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
According to the 9P spec http://man.cat-v.org/plan_9/5/open about the
create request:
The names . and .. are special; it is illegal to create files with these
names.
This patch causes the create and lcreate requests to fail with EINVAL if
the file name is either "." or "..".
Even if it isn't explicitly written in the spec, this patch extends the
checking to all requests that may cause a directory entry to be created:
- mknod
- rename
- renameat
- mkdir
- link
- symlink
The unlinkat request also gets patched for consistency (even if
rmdir("foo/..") is expected to fail according to POSIX.1-2001).
The various error values come from the linux manual pages.
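The check itself can be as simple as the following sketch (illustrative,
not necessarily the exact 9pfs code):

#include <errno.h>
#include <string.h>

static int validate_new_name(const char *name)
{
    if (!strcmp(name, ".") || !strcmp(name, "..")) {
        return -EINVAL;   /* refuse to create "." or ".." entries */
    }
    return 0;
}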
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Empty path components don't make sense for most commands and may cause
undefined behavior, depending on the backend.
Also, the walk request described in the 9P spec [1] clearly shows that
the client is supposed to send individual path components: the official
linux client never sends portions of a path containing the / character,
for example.
Moreover, the 9P spec [2] also states that a system can decide to restrict
the set of supported characters used in path components, with an explicit
mention "to remove slashes from name components".
This patch introduces a new name_is_illegal() helper that checks the
names sent by the client are not empty and don't contain unwanted chars.
Since 9pfs is only supported on linux hosts, only the / character is
checked at the moment. When support for other hosts (e.g. win32) is added,
other chars may need to be blacklisted as well.
If a client sends an illegal path component, the request will fail and
ENOENT is returned to the client.
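A minimal sketch of such a helper, assuming only empty names and the /
character need to be rejected on linux hosts:

#include <stdbool.h>
#include <string.h>

static bool name_is_illegal(const char *name)
{
    return !*name || strchr(name, '/') != NULL;
}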
[1] http://man.cat-v.org/plan_9/5/walk
[2] http://man.cat-v.org/plan_9/5/intro
Suggested-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reproducer:
CFLAGS="-g3 -O0" ./configure --target-list=aarch64-softmmu,arm-softmmu --enable-vhost-net --enable-virtfs
Here CFLAGS ends up with "-O2 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=2 ... -g3 -O0"
and pc-bios/optionrom/Makefile forgets to add the -O2 it needs.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since commit 7e8449594c, the socket connect code is blocking, because
calling socket_connect() without a callback is blocking. This reverts the
commit.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In previous commit
commit c7628bff41
Author: Gerd Hoffmann <kraxel@redhat.com>
Date: Fri Oct 30 12:10:09 2015 +0100
vnc: only alloc server surface with clients connected
the VNC server was changed so that the 'vd->server' pixman
image was only allocated when a client is connected.
Since then if a client disconnects and then reconnects to
the VNC server all they will see is a black screen until
they do something that triggers a refresh. On a graphical
desktop this is not often noticed since there's many things
going on which cause a refresh. On a plain text console it
is really obvious since nothing refreshes frequently.
The problem is that the VNC server didn't update the guest
dirty bitmap, so it still believes its server image is in sync
with the guest contents.
To fix this we must explicitly mark the entire guest desktop
as dirty after re-creating the server surface. Move this
logic into vnc_update_server_surface() so it is guaranteed
to be called in all code paths that re-create the surface,
instead of only in vnc_dpy_switch().
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Peter Lieven <pl@kamp.de>
Tested-by: Peter Lieven <pl@kamp.de>
Message-id: 1471365032-18096-1-git-send-email-berrange@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
virtqueue_discard() moves vq->last_avail_idx back so the element can be
popped again. It's necessary to decrement vq->inuse to avoid "leaking"
the element count.
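Roughly (a sketch with simplified state; the real fields live in the
VirtQueue structure in hw/virtio/virtio.c):

#include <stdint.h>

struct vq { uint16_t last_avail_idx; unsigned int inuse; };

static void discard_element(struct vq *vq)
{
    vq->last_avail_idx--;   /* the element can be popped again */
    vq->inuse--;            /* it is no longer in flight       */
}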
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The vq->inuse field is not migrated. Many devices don't hold
VirtQueueElements across migration so it doesn't matter that vq->inuse
starts at 0 on the destination QEMU.
At least virtio-serial, virtio-blk, and virtio-balloon migrate while
holding VirtQueueElements. For these devices we need to recalculate
vq->inuse upon load so the value is correct.
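A sketch of the recalculation, with simplified types (the real code also
fails the load if the result exceeds the ring size):

#include <stdint.h>

struct vq_state { uint16_t last_avail_idx, used_idx, inuse; unsigned int num; };

static int recalc_inuse(struct vq_state *vq)
{
    vq->inuse = (uint16_t)(vq->last_avail_idx - vq->used_idx);
    return vq->inuse > vq->num ? -1 : 0;   /* inconsistent state: fail */
}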
Cc: qemu-stable@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Mon 22 Aug 2016 09:06:32 BST
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
e1000e: remove internal interrupt flag
slirp: fix segv when init failed
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since commit f6c2e66ae8, slirp uses an exit notifier to call
slirp_smb_cleanup. However, if init() failed, the notifier isn't added,
and removing it will fail:
==18447== Invalid write of size 8
==18447== at 0x7EF2B5: notifier_remove (notify.c:32)
==18447== by 0x48E80C: qemu_remove_exit_notifier (vl.c:2661)
==18447== by 0x6A2187: net_slirp_cleanup (slirp.c:134)
==18447== by 0x69419D: qemu_cleanup_net_client (net.c:338)
==18447== by 0x69445B: qemu_del_net_client (net.c:401)
==18447== by 0x6A2B81: net_slirp_init (slirp.c:366)
==18447== by 0x6A4241: net_init_slirp (slirp.c:865)
==18447== by 0x695C6D: net_client_init1 (net.c:1051)
==18447== by 0x695F6E: net_client_init (net.c:1108)
==18447== by 0x696DBA: net_init_netdev (net.c:1498)
==18447== by 0x7F1F99: qemu_opts_foreach (qemu-option.c:1116)
==18447== by 0x696E60: net_init_clients (net.c:1516)
==18447== Address 0x0 is not stack'd, malloc'd or (recently) free'd
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Since f6880b7f [qemu-log: support simple pid substitution for logs],
test-logging creates files with hard-coded names in /tmp. In the best
case, this prevents multiple developers from running "make check" on
the same machine. In the worst case, it allows for symlink attacks,
enabling an attacker to overwrite files that are writable to the
developer running "make check".
Instead of hard-coding the paths, create a temporary directory using
g_dir_make_tmp() and clean it up afterwards.
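For illustration, the test can obtain its directory roughly like this (a
sketch; the real test also removes the directory and frees the path when
it is done):

#include <glib.h>

static gchar *make_test_log_dir(void)
{
    GError *err = NULL;
    gchar *dir = g_dir_make_tmp("qemu-test-logging.XXXXXX", &err);

    g_assert_no_error(err);
    return dir;
}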
Fixes: f6880b7f ("qemu-log: support simple pid substitution for logs")
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-id: 1471545963-11720-3-git-send-email-silbe@linux.vnet.ibm.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We're going to make use of g_dir_make_tmp() in test-logging. Provide a
compatibility implementation of it for glib < 2.30.
It may behave differently in some edge cases (e.g. a pattern only at the
end of the template, the file name not being part of the error message),
but it is good enough in practice.
Signed-off-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Message-id: 1471545963-11720-2-git-send-email-silbe@linux.vnet.ibm.com
[PMM: removed variable "template" which caused compilation failures
when C++ files include glib-compat.h]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
In 9c37146782 I've tried to fix a broken build with older
linux-headers. However, I didn't do it properly. The solution
implemented here is to grab the enums that caused the problem
initially, and rename their values so that they are "QEMU_"
prefixed. In order to guarantee matching values with actual
enums from linux-headers, the enums are seeded with starting
values from the original enums.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Laurent Vivier <laurent@vivier.eu>
Message-id: 75c14d6e8a97c4ff3931d69c13eab7376968d8b4.1471593869.git.mprivozn@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch reduces CPU usage of flush operations a bit. When one flush
has completed we should kick only the next operation. We should not start
all pending operations in the hope that they will go back to waiting on
the wait queue.
With the previous approach there is also a technical possibility that
requests get reordered: after wakeup all requests are removed from
the wait queue, become active and are processed one by one, re-adding
themselves to the wait queue in the same order. However, a new flush can
arrive before all of them have been re-queued.
Signed-off-by: Denis V. Lunev <den@openvz.org>
Tested-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Message-id: 1471457214-3994-3-git-send-email-den@openvz.org
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Fam Zheng <famz@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
The following commit
commit 3ff2f67a7c
Author: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Date: Mon Jul 18 22:39:52 2016 +0300
block: ignore flush requests when storage is clean
has introduced a regression.
It is still possible for two requests to execute in a non-sequential
fashion, and sometimes this results in a deadlock when bdrv_drain_one/all
are called for a BDS with such stalled requests.
1. Current flushed_gen and flush_started_gen is 1.
2. Request 1 enters bdrv_co_flush with write_gen 1 (i.e. the same
as flushed_gen). It gets past flushed_gen != flush_started_gen and
sets flush_started_gen to 1 (again, the same it was before).
3. Request 1 yields somewhere before exiting bdrv_co_flush
4. Request 2 enters bdrv_co_flush with write_gen 2. It gets past
flushed_gen != flush_started_gen and sets flush_started_gen to 2.
5. Request 2 runs to completion and sets flushed_gen to 2
6. Request 1 is resumed, runs to completion and sets flushed_gen to 1.
However flush_started_gen is now 2.
From here on out flushed_gen is always != flush_started_gen and all
further requests will wait on flush_queue. This change replaces
flush_started_gen with an explicitly tracked active flush request.
Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Message-id: 1471457214-3994-2-git-send-email-den@openvz.org
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Fam Zheng <famz@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
# gpg: Signature made Thu 18 Aug 2016 06:36:16 BST
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
net/net: properly handle multiple packets in net_fill_rstate()
net: vmxnet: use g_new for pkt initialisation
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Fix 'make docker-test-mingw@fedora'
Peter,
This is the single patch that stalls patchew's mingw testing. Since it
is small and trivial, let's have it in 2.7.
Fam
# gpg: Signature made Wed 17 Aug 2016 13:13:53 BST
# gpg: using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/docker-pull-request:
curl: Cast fd to int for DPRINTF
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When the network is busy, we may receive multiple packets at one time. In
that situation, we should keep receiving instead of finalizing only the
first packet.
Signed-off-by: Zhang Chen <zhangchen.fnst@cn.fujitsu.com>
Signed-off-by: Li Zhijian <lizhijian@cn.fujitsu.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
When the network transport abstraction layer initialises 'pkt', the maximum
fragmentation count is not checked. This could lead to an integer
overflow causing a NULL pointer dereference. Replace g_malloc() with
g_new() to catch the multiplication overflow.
Reported-by: Li Qiang <liqiang6-s@360.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Acked-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
Currently "make docker-test-mingw@fedora" has a warning like:
/tmp/qemu-test/src/block/curl.c: In function 'curl_sock_cb':
/tmp/qemu-test/src/block/curl.c:172:6: warning: format '%d' expects
argument of type 'int', but argument 4 has type 'curl_socket_t {aka long
long unsigned int}'
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
^
cc1: all warnings being treated as errors
Cast to int to suppress it.
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <1470027888-24381-1-git-send-email-famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
The llseek syscall takes two 32-bit arguments, offset_high
and offset_low, which must be combined to form a single
64-bit offset. Unfortunately we were combining them with
((uint64_t)arg2 << 32) | arg3
and arg3 is a signed type; this meant that when promoting
arg3 to a 64-bit type it would be sign-extended. The effect
was that if the offset happened to have bit 31 set then
this bit would get sign-extended into all of bits 63..32.
Explicitly cast arg3 to abi_ulong to avoid the erroneous
sign extension.
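In other words (a sketch, assuming abi_ulong/abi_long are 32-bit types
for a 32-bit guest):

#include <stdint.h>

typedef uint32_t abi_ulong;
typedef int32_t  abi_long;

static uint64_t combine_offset(abi_long offset_high, abi_long offset_low)
{
    /* buggy: ((uint64_t)offset_high << 32) | offset_low
     * offset_low is signed, so bit 31 sign-extends into bits 63..32 */
    return ((uint64_t)offset_high << 32) | (abi_ulong)offset_low;
}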
Reported-by: Chanho Park <parkch98@gmail.com>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Chanho Park <parkch98@gmail.com>
Message-id: 1470938379-1133-1-git-send-email-peter.maydell@linaro.org
In c5dff280 we tried to improve our understanding of netlink messages
by adding code that does some translation. However, that code assumed
linux-headers of at least version 4.4, because most of the symbols there
(if not all of them) were added in just that release. This, however,
breaks the build on systems with older versions of the package.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Message-id: 23806aac6db3baf7e2cdab4c62d6e3468ce6b4dc.1471340849.git.mprivozn@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The kvm_pv_unhalt feature doesn't work if kernel_irqchip is
disabled, so we need to report it as unsupported.
Tested-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Since commit d7a04fd7d5, tcp_chr_wait_connected() was introduced,
so vhost-user could wait until a backend started successfully. In
vhost-user case, the chr socket must be plain unix, and the chr+vhost
setup happens synchronously during qemu startup.
However, with TLS and telnet socket, initial socket setup happens
asynchronously, and s->connected is not set after the socket is
accepted. In order for tcp_chr_wait_connected() to not keep accepting
new connections and proceed with the last accepted socket, it can
check for s->ioc instead.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 20160816083332.15088-1-marcandre.lureau@redhat.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The virtio-gpu.h file defines a macro VIRTIO_GPU_FILL_CMD
which includes a call to qemu_log_mask, but does not
include qemu/log.h. In a default configure, it is lucky
and gets qemu/log.h indirectly due to the 'log' trace
backend being enabled. If that trace backend is disabled
though, eg
./configure --enable-trace-backends=nop
Then the build will fail:
In file included from /home/berrange/src/virt/qemu/hw/display/virtio-gpu-3d.c:19:0:
/home/berrange/src/virt/qemu/hw/display/virtio-gpu-3d.c: In function ‘virgl_cmd_create_resource_2d’:
/home/berrange/src/virt/qemu/include/hw/virtio/virtio-gpu.h:138:13: error: implicit declaration of function ‘qemu_log_mask’ [-Werror=implicit-function-declaration]
qemu_log_mask(LOG_GUEST_ERROR, \
^
/home/berrange/src/virt/qemu/hw/display/virtio-gpu-3d.c:34:5: note: in expansion of macro ‘VIRTIO_GPU_FILL_CMD’
VIRTIO_GPU_FILL_CMD(c2d);
^~~~~~~~~~~~~~~~~~~
/home/berrange/src/virt/qemu/hw/display/virtio-gpu-3d.c:34:5: error: nested extern declaration of ‘qemu_log_mask’ [-Werror=nested-externs]
In file included from /home/berrange/src/virt/qemu/hw/display/virtio-gpu-3d.c:19:0:
/home/berrange/src/virt/qemu/include/hw/virtio/virtio-gpu.h:138:27: error: ‘LOG_GUEST_ERROR’ undeclared (first use in this function)
qemu_log_mask(LOG_GUEST_ERROR, \
[snip many more errors]
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-id: 1470648700-3474-1-git-send-email-berrange@redhat.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Block layer patches for 2.7.0-rc3
# gpg: Signature made Mon 15 Aug 2016 14:55:46 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream:
iotests: Test case for wrong runtime option types
block/nbd: Store runtime option values
block/blkdebug: Store config filename
block/nbd: Use QemuOpts for runtime options
block/ssh: Use QemuOpts for runtime options
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Since
commit a9c87304b7 ("build-sys: fix building with make CFLAGS=.. argument")
pc-bios/s390-ccw.img build might fail with
--- snip ---
main.o: In function `virtio_setup':
qemu/pc-bios/s390-ccw/main.c:117: undefined reference to `__stack_chk_fail'
--- snip ---
Changing the CFLAGS to QEMU_CFLAGS does the trick. We also need to
add -fno-strict-aliasing as this was filtered out.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Message-Id: <1471258997-5811-1-git-send-email-borntraeger@de.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
ppc patch queue for 2016-08-15
Just a single patch here, I hope this is the last ppc / spapr fix to
squeeze into qemu-2.7.
# gpg: Signature made Mon 15 Aug 2016 07:46:36 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.7-20160815:
ppc: parse cpu features once
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The various host OSes are irritatingly variable about the name
of the linker emulation we need to pass to ld's -m option to
build the i386 option ROMs. Instead of doing this via a
CONFIG ifdef, check in configure whether any of the emulation
names we know about will work and pass the right answer through
to the makefile. If we can't find one, we fall back to not trying
to build the option ROMs, in the same way we would for a non-x86
host platform.
This is in particular necessary to unbreak the build on OpenBSD,
since it wants a different answer to FreeBSD and we don't have
an existing CONFIG_ variable that distinguishes the two.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Sean Bruno <sbruno@freebsd.org>
Message-id: 1470672688-6754-1-git-send-email-peter.maydell@linaro.org
Store the runtime option values in the BDRVNBDState so they can later be
used in nbd_refresh_filename() without having to directly access the
options QDict which may contain values of non-string types.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Store the configuration file's filename so it can later be used in
bdrv_refresh_filename() without having to directly access the options
QDict which may contain a value of a non-string type.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Using QemuOpts will prevent qemu from crashing if the input options have
not been validated (which is the case when they are specified on the
command line or in a json: filename) and some have the wrong type.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Using QemuOpts will prevent qemu from crashing if the input options have
not been validated (which is the case when they are specified on the
command line or in a json: filename) and some have the wrong type.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Considering that features are converted to global properties and
global properties are automatically applied to every new instance
of created CPU (at object_new() time), there is no point in
parsing the cpu_model string every time a CPU is created. So move
the parsing outside the CPU creation loop and do it only once.
Parsing should also be done before any CPU is created so that
features affect the first CPU as well.
This patch does that for all PowerPC machine types.
It is based on previous work from Bharata:
https://lists.nongnu.org/archive/html/qemu-devel/2016-06/msg07564.html
Signed-off-by: Greg Kurz <groug@kaod.org>
[clg: only kept the fix for the spapr platform. support for other
platform will be added in 2.8 ]
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
VMs created on older versions on Xen will not have been provisioned with
pages to support creation of non-default ioreq servers. In this case
the ioreq server API is not supported and QEMU's only option is to fall
back to using the default ioreq server pages as it did prior to
commit 3996e85c ("Xen: Use the ioreq-server API when available").
This patch therefore changes the code in xen_common.h to stop considering
a failure of xc_hvm_create_ioreq_server() as a hard failure but simply
as an indication that the guest is too old to support the ioreq server
API. Instead a boolean is set to cause reversion to old behaviour such
that the default ioreq server is then used.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
emu_regs is a pointer, so ARRAY_SIZE doesn't return what we expect.
Since the remaining message is enough for debugging, just remove it.
Also tweak the message a little.
Signed-off-by: Cao jin <caoj.fnst@cn.fujitsu.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
Currently the -version command line argument prints a string ending
with "Copyright (c) 2003-2008 Fabrice Bellard". This is now some
eight years out of date; abstract it out of the several places that
print the string and update it to:
Copyright (c) 2003-2016 Fabrice Bellard and the QEMU Project developers
to reflect the work by all the QEMU Project contributors over the
last decade.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1470309276-5012-1-git-send-email-peter.maydell@linaro.org
The virtio-console.c file handles both serial consoles
and interactive consoles, since they're backed by the
same device model.
Since serial devices are expected to be reliable and
need to notify the guest when the backend is opened
or closed, the virtio-console.c file wires up support
for chardev events. This affects both serial consoles
and interactive consoles, using a network connection
based chardev backend such as 'socket', but not when
using a PTY based backend or plain 'file' backends.
When the host side is not connected the handle_output()
method in virtio-serial-bus.c will drop any data sent
by the guest, before it even reaches the virtio-console.c
code. This means that if the chardev has a logfile
configured, the data will never get logged.
Consider for example, configuring a x86_64 guest with a
plain UART serial port
-chardev socket,id=charserial1,host=127.0.0.1,port=9001,server,nowait,logfile=console1.log,logappend=on
-device isa-serial,chardev=charserial1,id=serial1
vs a s390 guest which has to use the virtio-console port
-chardev socket,id=charconsole1,host=127.0.0.1,port=9000,server,nowait,logfile=console2.log,logappend=on
-device virtconsole,chardev=charconsole1,id=console1
The isa-serial one gets data written to the log regardless
of whether a client is connected, while the virtio-console
one only gets data written to the log when a client is
connected.
There is no need for virtio-serial-bus.c to aggressively
drop the data for console devices, as the chardev code is
perfectly capable of discarding the data itself.
So this patch changes virtconsole devices so that they
are always marked as having the host side open. This
ensures that the guest OS will always send any data it
has (Linux virtio-console hvc driver actually ignores
the host open state and sends data regardless, but we
should not rely on that), and also prevents the
virtio-serial-bus code prematurely discarding data.
The behaviour of virtserialport devices is *not* changed,
only virtconsole, because for the former it is important
that the guest OS knows exactly when the host side is opened
/ closed so it can do any protocol re-negotiation that may
be required.
Fixes bug: https://bugs.launchpad.net/qemu/+bug/1599214
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <1470241360-3574-2-git-send-email-berrange@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
It is generally not expected that io_submit() fails other than with
-EAGAIN, but corner cases like SELinux refusing I/O when permissions are
revoked are still possible. In this case, we shouldn't abort, but just
return an I/O error for the request.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 1470741619-23231-1-git-send-email-kwolf@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* pc-bios/optionrom/Makefile fixes
* warning fixes for __atomic_load and -1 << x in clang
* missed interrupt fix from Gonglei
* checkpatch fix from Radim and myself
# gpg: Signature made Wed 10 Aug 2016 14:54:31 BST
# gpg: using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream:
checkpatch: default to success if only warnings
checkpatch: bump most warnings to errors
CODING_STYLE, checkpatch: update line length rules
checkpatch: check for CVS keywords on all sources
checkpatch: tweak the files in which TABs are checked
timer: set vm_clock disabled default
checkpatch: ignore automatically imported Linux headers
clang: Fix warning reg. expansion to 'defined'
Disable warn about left shifts of negative values
atomic: strip "const" from variables declared with typeof
optionrom: fix compilation with mingw docker target
optionrom: add -fno-stack-protector
build-sys: fix building with make CFLAGS=.. argument
linuxboot_dma: avoid guest ABI breakage on gcc vs. clang compilation
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The set_mem_table command currently does not seek a reply. Hence, there is
no easy way for a remote application to notify QEMU when it has finished
setting up memory, or if there were errors doing so.
As an example:
(1) Qemu sends a SET_MEM_TABLE to the backend (eg, a vhost-user net
application). SET_MEM_TABLE does not require a reply according to the spec.
(2) Qemu commits the memory to the guest.
(3) Guest issues an I/O operation over a new memory region which was configured on (1).
(4) The application has not yet remapped the memory, but it sees the I/O request.
(5) The application cannot satisfy the request because it does not know about those GPAs.
While a guaranteed fix would require a protocol extension (committed separately),
a best-effort workaround for existing applications is to send a GET_FEATURES
message before completing the vhost_user_set_mem_table() call.
Since GET_FEATURES requires a reply, an application that processes vhost-user
messages synchronously would probably have completed the SET_MEM_TABLE before replying.
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This introduces the VHOST_USER_PROTOCOL_F_REPLY_ACK.
If negotiated, client applications should send a u64 payload in
response to any message that contains the "need_reply" bit set
on the message flags. Setting the payload to "zero" indicates the
command finished successfully. Likewise, setting it to "non-zero"
indicates an error.
Currently implemented only for SET_MEM_TABLE.
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Prerna Saxena <prerna.saxena@nutanix.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
'vhost_set_vring_enable()' tries to call a function through the
'vhost_ops' pointer, which may already have been zeroed in
'vhost_dev_cleanup()' during vhost disconnection.
Fix that by checking 'vhost_ops' before using it. This fixes QEMU crash
on calling 'ethtool -L eth0 combined 2' if vhost disconnected.
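The guard is essentially the following (a sketch with simplified types):

struct vhost_ops;
struct vhost_dev { const struct vhost_ops *vhost_ops; };

static int set_vring_enable_guarded(struct vhost_dev *dev, int enable)
{
    if (!dev->vhost_ops) {
        return 0;   /* backend already disconnected and cleaned up */
    }
    /* ... otherwise forward to dev->vhost_ops->vhost_set_vring_enable() ... */
    (void)enable;
    return 0;
}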
Signed-off-by: Ilya Maximets <i.maximets@samsung.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
ppc patch queue for 2016-08-10
Here are some more last minute PAPR and ppc related fixes for
qemu-2.7. One patch makes compressed memory dumps work with guest
kernels using page sizes up to 64KiB. This is important since most
current pseries guests use a 64KiB default page size. The remainder
fix a regression with handling of CPU aliases which causes serious
problem for libvirt.
# gpg: Signature made Wed 10 Aug 2016 06:44:27 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.7-20160810:
ppc/kvm: Register also a generic spapr CPU core family type
ppc/kvm: Do not mess up the generic CPU family registration
hw/ppc/spapr: Look up CPU alias names instead of hard-coding the aliases
ppc: Introduce a function to look up CPU alias strings
spapr: remove extra type variable
ppc64: fix compressed dump with pseries kernel
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
CHK-level checks have been removed from checkpatch or bumped to
errors, so there is no effect anymore for --strict/--subjective.
Furthermore, even most WARNs have been bumped to errors, with
WARN only reserved to things that patchew probably ought not
to complain about (and that maintainers probably will notice
anyway during review if they are extreme).
Default to exiting with success even if there are WARN-level
failures, and cause --strict to fail for warnings. Maintainers
that want to have a strict 80-character limit for their subsystem
can add it to a commit hook for example.
The --subjective synonym is removed.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This only leaves a warning-level message for the extra-long lines
soft limit. Everything else is bumped up.
In the future warnings can be added for checks that can have false
positives.
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Line lengths above 80 characters do exist. They are rare, but
they happen from time to time. An ignored rule is worse than an
exception to the rule, so do the latter.
Some on the list expressed their preference for a soft limit that
is slightly lower than 80 characters, to account for extra characters
in unified diffs (including three-way diffs) and for email quoting.
However, there was no consensus on this so keep the 80-character
soft limit and add a hard limit at 90.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
These should apply to all files, not just C/C++. Tweak the regular
expression to check for whole words, to avoid false positives on Perl
variables starting with "Id".
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Include Python and shell scripts, and make an exception for Perl
scripts we imported from Linux or elsewhere.
Acked-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There is a regression with the "-cpu" parameter introduced by
the spapr CPU hotplug code: We used to allow to specify a
"CPU family" name with the "-cpu" parameter when running on KVM so
that the user does not need to know the gory details of the exact
CPU version of the host CPU. For example, it was possible to
use "-cpu POWER8" on a POWER8E host CPU. This behavior does not
work anymore with the new hot-pluggable spapr-cpu-core types.
Since libvirt already heavily depends on the old behavior, this
is quite a severe regression in the QEMU parameter interface.
Let's fix it by supporting a CPU family type for the spapr-cpu-core
on KVM, too.
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1363812
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The code for registering the sPAPR CPU host core type has been
added in between the generic CPU host core type and the generic
CPU family type. That way the instance_init and the class_init
information got lost when registering the generic CPU family
type. Fix it by moving the generic family registration before
the spapr cpu core registration code.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Hard-coding the CPU alias names in the spapr_cores[] array has
two big disadvantages:
1) We register a real type with the CPU alias name in
spapr_cpu_core_register_types() - this prevents us from registering
a CPU family name in kvm_ppc_register_host_cpu_type() with the same
name (as we do it for the non-hotpluggable CPU types).
2) It's quite cumbersome to maintain the aliases here in sync with the
ppc_cpu_aliases list from target-ppc/cpu-models.c.
So let's simply add proper alias lookup to the spapr cpu core code,
too (by checking whether the given model can be used directly, and
if not by trying to look up the given model as an alias name instead).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We will need this function to look up the aliases in the
spapr-cpu-core code, too.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The sPAPR CPU core typename is already available in the upper
block. Let's use it and move the check upward also.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If we don't provide the page size in target-ppc:cpu_get_dump_info(),
the default one (TARGET_PAGE_SIZE, 4KB) is used to create
the compressed dump. It works fine with Macintosh, but not with
pseries as the kernel default page size is 64KB.
Without this patch, if we generate a compressed dump in the QEMU monitor:
(qemu) dump-guest-memory -z qemu.dump
This dump cannot be read by crash:
# crash vmlinux qemu.dump
...
WARNING: cannot translate vmemmap kernel virtual addresses:
commands requiring page structure contents will fail
...
page_size is used to determine the dumpfile's block size. The
block size needs to be at least the page size, but a multiple of the page
size works fine too. For PPC64, Linux supports either a 4KB or a 64KB
software page size, so we set page_size to 64KB.
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Upon migration, the code (added in commit 80dcfb8532) uses a vm_clock-based
timer, armed for 1ns in the future from post_load, to send the event in case
host_connected differs between migration source and target.
However, it's not guaranteed that the apic is ready to inject irqs into
the guest, and the irq line remains high, resulting in any future interrupts
going unnoticed by the guest as well.
That's because 1) the migration coroutine is not blocked when it gets EAGAIN
while reading the QEMUFile, and 2) vm_clock is currently enabled by default
and does not rely on vm_start() being called, which means vm_clock timers
can run before VCPUs are running.
So, let's make vm_clock disabled by default, keeping the initial design
intention for vm_clock timers.
Meanwhile, change the test-aio use case to use QEMU_CLOCK_REALTIME instead
of QEMU_CLOCK_VIRTUAL, as the block code does.
CC: Paolo Bonzini <pbonzini@redhat.com>
CC: Dr. David Alan Gilbert <dgilbert@redhat.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Gonglei <arei.gonglei@huawei.com>
Message-Id: <1470728955-90600-1-git-send-email-arei.gonglei@huawei.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Linux uses tabs for indentation and checkpatch always complained about
automatically imported headers. update-linux-headers.sh could be modified to
expand tabs, but there is no real reason to complain about any ugly code in
Linux headers, so skip all hunk-related checks.
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
It seems like there's no good reason for the compiler to exploit the
undefinedness of left shifts. GCC explicitly documents that it does not
use this possibility at all and, while the documentation also says this is
subject to change, it has been saying this for 10 years (since the wording
appeared in the GCC 4.0 manual).
Disable these warnings by passing in -Wno-shift-negative-value.
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Markus Armbruster <armbru@redhat.com>
Cc: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[pranith: forward-port part of patch to 2.7]
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
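For reference, this is the kind of construct the warning fires on (a standalone example, not taken from the QEMU tree):

#include <stdio.h>

int main(void)
{
    /* Left-shifting a negative value is undefined in ISO C and triggers
     * -Wshift-negative-value, but GCC documents the obvious
     * two's-complement behaviour. */
    int mask = -1 << 8;
    printf("0x%x\n", (unsigned int)mask);   /* 0xffffff00 with 32-bit int */
    return 0;
}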
With the latest clang, we have the following warning:
/home/pranith/devops/code/qemu/include/qemu/seqlock.h:62:21: warning: passing 'typeof (*&sl->sequence) *' (aka 'const unsigned int *') to parameter of type 'unsigned int *' discards qualifiers [-Wincompatible-pointer-types-discards-qualifiers]
return unlikely(atomic_read(&sl->sequence) != start);
^~~~~~~~~~~~~~~~~~~~~~~~~~
/home/pranith/devops/code/qemu/include/qemu/atomic.h:58:25: note: expanded from macro 'atomic_read'
__atomic_load(ptr, &_val, __ATOMIC_RELAXED); \
^~~~~
Stripping const is a bit tricky due to promotions, but it is doable
with either C11 _Generic or GCC extensions. Use the latter.
Reported-by: Pranith Kumar <bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[pranith: Add conversion for bool type]
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
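The underlying trick can be sketched with GCC extensions as below; this is an illustrative macro, not QEMU's exact implementation, using __builtin_choose_expr and __builtin_types_compatible_p so that typeof() yields an unqualified type from which a writable temporary can be declared:

#include <stdio.h>

/* Illustrative only: strip const from an unsigned int expression by having
 * typeof() look at an unqualified value chosen at compile time. */
#define strip_const_uint(expr)                                          \
    typeof(__builtin_choose_expr(                                       \
        __builtin_types_compatible_p(typeof(expr), unsigned int),       \
        (unsigned int)0, (expr)))

int main(void)
{
    const unsigned int seq = 42;
    strip_const_uint(seq) tmp = seq;   /* tmp is a plain unsigned int */
    unsigned int *p = &tmp;            /* no discarded-qualifiers warning */
    printf("%u\n", *p);
    return 0;
}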
Two fixes are needed. First, mingw does not have -D_FORTIFY_SOURCE,
hence --enable-debug disables optimization. This is not acceptable
for ROMs, which should override CFLAGS to force inclusion of -O2.
Second, PE stores global constructors and destructors using the
following linker script snippet:
___CTOR_LIST__ = .; __CTOR_LIST__ = . ;
LONG (-1);*(.ctors); *(.ctor); *(SORT(.ctors.*)); LONG (0);
___DTOR_LIST__ = .; __DTOR_LIST__ = . ;
LONG (-1); *(.dtors); *(.dtor); *(SORT(.dtors.*)); LONG (0);
The LONG directives cause the .img files to be 16 bytes too large;
the recently added check to signrom.py catches this. To fix this,
replace -T and -e options with a linker script.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When calling make with a CFLAGS=.. argument, the -g/-O filter is not
applied, which may result in a build failure with ASAN, for example. It
could be solved with an 'override' directive on CFLAGS, but that would
actually prevent setting different CFLAGS manually.
Instead, filter the CFLAGS argument from the top-level Makefile (so
you can still call make with a different CFLAGS argument on a
rom/Makefile manually).
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20160805082421.21994-2-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Recent GCC compiles linuxboot_dma.c to 921 bytes, while CentOS 6 needs
1029 and clang needs 1527. Because the size of the ROM, rounded up to the
next 512 bytes, must match, this causes the ABI to break between a <1K
ROM and one that is bigger.
We want to make the ROM 1.5 KB in size, but it's better to make clang
produce leaner ROMs, because currently it is worryingly close to the limit.
To fix this, first prevent clang's happy inlining (which -Os cannot prevent);
this only requires adding a noinline attribute.
Second, the patch makes sure that the ROM has enough padding to prevent
ABI breakage on different compilers. The size is now hardcoded in the file
that is passed to signrom.py, as was the case before commit 6f71b77
("scripts/signrom.py: Allow option ROM checksum script to write the size
header.", 2016-05-23); signrom.py however will still pad the input to
the requested size. This ensures that the padding goes beyond the
next multiple of 512 if necessary, and also avoids the need for
-fno-toplevel-reorder which clang doesn't support. signrom.py can then
error out if the requested size is too small for the actual size of the
compiled ROM.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
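The inlining part of the fix boils down to something like this sketch (the helper name is made up; the actual function in linuxboot_dma.c may differ):

#include <stdint.h>

/* Marking a small helper noinline stops clang from duplicating its body at
 * every call site, which keeps the compiled option ROM within its size
 * budget even at -Os. */
static __attribute__((noinline)) uint32_t read_le32(const uint8_t *p)
{
    return p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24);
}

int main(void)
{
    const uint8_t buf[4] = { 0x78, 0x56, 0x34, 0x12 };
    return read_le32(buf) == 0x12345678 ? 0 : 1;
}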
# gpg: Signature made Tue 09 Aug 2016 08:28:39 BST
# gpg: using RSA key 0xEF04965B398D6211
# gpg: Good signature from "Jason Wang (Jason Wang on RedHat) <jasowang@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 215D 46F4 8246 689E C77F 3562 EF04 965B 398D 6211
* remotes/jasowang/tags/net-pull-request:
hw/net: Fix a heap overflow in xlnx.xps-ethernetlite
net: vmxnet3: check for device_active before write
net: check fragment length during fragmentation
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The .receive callback of xlnx.xps-ethernetlite doesn't check the length
of the data before calling memcpy. As a result, the NetClientState object
on the heap can be overflowed. All versions of QEMU with xlnx.xps-ethernetlite
are affected.
Reported-by: chaojianhu <chaojianhu@hotmail.com>
Signed-off-by: chaojianhu <chaojianhu@hotmail.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
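In the spirit of the fix (a hedged sketch with invented structure and buffer names, not the actual xlnx.xps-ethernetlite code), the receive path simply needs to reject frames larger than the receive buffer before the memcpy:

#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define RXBUF_SIZE 0x800   /* illustrative receive buffer size */

typedef struct EthliteSketch {
    uint8_t rxbuf[RXBUF_SIZE];
} EthliteSketch;

/* Returns the number of bytes consumed, or -1 if the frame is dropped. */
static long ethlite_receive(EthliteSketch *s, const uint8_t *buf, size_t size)
{
    if (size > RXBUF_SIZE) {
        /* Drop oversized frames instead of overflowing rxbuf. */
        return -1;
    }
    memcpy(s->rxbuf, buf, size);
    return (long)size;
}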
The vmxnet3 device emulator does not check whether the device is active
before using it for a write. This leads to a use-after-free issue
if the vmxnet3_io_bar0_write routine is called after the device has been
deactivated. Add a check to avoid it.
Reported-by: Li Qiang <liqiang6-s@360.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Acked-by: Dmitry Fleytman <dmitry@daynix.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
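A minimal sketch of the guard described above (field and function names here are illustrative, not the actual vmxnet3 emulation code):

#include <stdbool.h>
#include <stdint.h>

typedef struct Vmxnet3Sketch {
    bool device_active;
} Vmxnet3Sketch;

static void vmxnet3_bar0_write(Vmxnet3Sketch *s, uint64_t addr, uint64_t val)
{
    if (!s->device_active) {
        /* Ignore writes once the device has been deactivated, so the
         * handler never touches state that may already be torn down. */
        return;
    }
    /* ... handle the register write for addr/val ... */
    (void)addr;
    (void)val;
}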
The network transport abstraction layer supports packet fragmentation.
While fragmenting a packet, it determines whether more fragments remain
from the packet length and the current fragment length. It is susceptible
to an infinite loop if the current fragment length is zero.
Add a check to avoid it.
Reported-by: Li Qiang <liqiang6-s@360.cn>
Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
Reviewed-by: Dmitry Fleytman <dmitry@daynix.com>
CC: qemu-stable@nongnu.org
Signed-off-by: Jason Wang <jasowang@redhat.com>
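The fix amounts to bailing out when a fragment would not advance the loop; a hedged, self-contained sketch follows (names invented, not the actual net transport code):

#include <stddef.h>

/* Split total_len bytes into fragments; next_fragment_len() reports how many
 * bytes the next fragment will carry.  A zero-length fragment means no
 * forward progress is possible, so stop instead of looping forever. */
static int send_fragmented(size_t total_len,
                           size_t (*next_fragment_len)(void *opaque),
                           void *opaque)
{
    size_t sent = 0;

    while (sent < total_len) {
        size_t frag_len = next_fragment_len(opaque);

        if (frag_len == 0) {
            return -1;   /* avoid an infinite loop on a zero-length fragment */
        }
        /* ... emit the fragment here ... */
        sent += frag_len;
    }
    return 0;
}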
@@ -276,7 +277,7 @@ int spapr_vio_send_crq(VIOsPAPRDevice *dev, uint8_t *crq)
     uint8_t byte;
     if (!dev->crq.qsize) {
-        fprintf(stderr, "spapr_vio_send_creq on uninitialized queue\n");
+        error_report("spapr_vio_send_creq on uninitialized queue");
         return -1;
     }