Compare commits

...

30 Commits

Author SHA1 Message Date
Michael Roth
04024dea26 update VERSION for v1.3.1
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-28 10:38:28 -06:00
Markus Armbruster
1bd4397e8d qxl: Fix SPICE_RING_PROD_ITEM(), SPICE_RING_CONS_ITEM() sanity check
The pointer arithmetic there is safe, but ugly.  Coverity grouses
about it.  However, the actual comparison is off by one: <= end
instead of < end.  Fix by rewriting the check in a cleaner way.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit bc5f92e5db)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:44:31 -06:00
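A tiny standalone sketch of the cleaner, index-based style of check (hypothetical ring type and size; not the actual QEMU macro):

#include <assert.h>
#include <stdio.h>

#define RING_SIZE 8   /* assumed power-of-two ring size */

struct ring {
    int items[RING_SIZE];
    unsigned prod;
};

/* Check the masked index directly against the array bound instead of doing
 * pointer-range arithmetic; there is no way to get the bound off by one. */
static int *ring_prod_item(struct ring *r)
{
    unsigned prod = r->prod & (RING_SIZE - 1);

    if (prod >= RING_SIZE) {
        return NULL;              /* guest bug: index out of range */
    }
    return &r->items[prod];
}

int main(void)
{
    struct ring r = { .prod = 9 };        /* 9 & 7 == 1 */
    assert(ring_prod_item(&r) == &r.items[1]);
    printf("ok\n");
    return 0;
}
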
Sander Eikelenboom
e76672424d Fix compile errors when enabling Xen debug logging.
Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit f1b8caf1d9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:08:52 -06:00
Stefano Stabellini
df50a7e0cb xen: fix trivial PCI passthrough MSI-X bug
We are currently passing entry->data as the address parameter. Pass
entry->addr instead.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
Xen-devel: http://marc.info/?l=xen-devel&m=135515462613715
(cherry picked from commit 044b99c655)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:07:21 -06:00
Roger Pau Monne
90c96d33c4 xen_disk: fix memory leak
On ioreq_release the full ioreq was memset to 0, losing all the data
and memory allocations inside the QEMUIOVector, which leads to a
memory leak. Create a new function to specifically reset ioreq.

Reported-by: Maik Wessler <maik.wessler@yahoo.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 282c6a2f29)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:06:50 -06:00
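A minimal illustration of the bug class in plain C (stand-in types, not the xen_disk code): memset()ing a struct that owns heap memory drops the only pointer to it, whereas a dedicated reset function can deal with the allocation explicitly.

#include <stdlib.h>
#include <string.h>

struct iobuf {            /* stand-in for the embedded QEMUIOVector */
    void *data;
    size_t size;
};

struct ioreq {
    int status;
    struct iobuf v;
};

/* Leaky: zeroing the whole struct loses v.data forever. */
static void ioreq_release_leaky(struct ioreq *req)
{
    memset(req, 0, sizeof(*req));
}

/* Reset the fields individually and deal with the allocation first. */
static void ioreq_reset(struct ioreq *req)
{
    free(req->v.data);
    req->v.data = NULL;
    req->v.size = 0;
    req->status = 0;
}

int main(void)
{
    struct ioreq req = { .v = { malloc(64), 64 } };
    ioreq_reset(&req);            /* no leak */
    (void)ioreq_release_leaky;    /* the broken variant, kept for contrast */
    return 0;
}
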
Peter Maydell
4ee28799d4 tcg/target-arm: Add missing parens to assertions
Silence a (legitimate) complaint about missing parentheses:

tcg/arm/tcg-target.c: In function ‘tcg_out_qemu_ld’:
tcg/arm/tcg-target.c:1148:5: error: suggest parentheses around
comparison in operand of ‘&’ [-Werror=parentheses]
tcg/arm/tcg-target.c: In function ‘tcg_out_qemu_st’:
tcg/arm/tcg-target.c:1357:5: error: suggest parentheses around
comparison in operand of ‘&’ [-Werror=parentheses]

which meant that we would mistakenly always assert if running
a QEMU built with debug enabled on ARM.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 5256a7208a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:52:23 -06:00
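The complaint is about C operator precedence: == binds tighter than &, so without parentheses the assertion tests the wrong expression. A small standalone demonstration (not QEMU code):

#include <assert.h>
#include <stdio.h>

int main(void)
{
    unsigned long tlb_offset = 0x123;   /* any value that fits in 20 bits */

    /* Without parens this parses as tlb_offset & (~0xfffffUL == 0),
     * i.e. tlb_offset & 0, which is always 0 -> assert(wrong) always fires. */
    unsigned long wrong = tlb_offset & ~0xfffffUL == 0;

    /* What was actually meant: the offset is contained within 20 bits. */
    unsigned long right = (tlb_offset & ~0xfffffUL) == 0;

    printf("wrong = %lu, right = %lu\n", wrong, right);   /* 0 vs 1 */
    assert(right);
    return 0;
}
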
Kevin Wolf
563068a8b2 win32-aio: Fix memory leak
The buffer is allocated for both reads and writes, and obviously it
should be freed even if an error occurs.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit e8bccad5ac)

Conflicts:

	block/win32-aio.c

*addressed conflict due to buggy g_free() still in use instead of
qemu_vfree() as it is upstream (via commit 7479acdb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:43:10 -06:00
Kevin Wolf
cdb483457c win32-aio: Fix vectored reads
Copying data in the right direction really helps a lot!

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit bcbbd234d4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:24:00 -06:00
Kevin Wolf
9d173df553 aio: Fix return value of aio_poll()
aio_poll() must return true if any work is still pending, even if it
didn't make progress, so that bdrv_drain_all() doesn't stop waiting too
early. The possibility of stopping early occasionally led to a failed
assertion in bdrv_drain_all(), when some in-flight request was missed
and the function didn't really drain all requests.

In order to make that change, the return value as specified in the
function comment must change for blocking = false; fortunately, the
return value of blocking = false callers is only used in test cases, so
this change shouldn't cause any trouble.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 2ea9b58f0b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:23:52 -06:00
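A sketch of the calling pattern that relies on this return value, with a hypothetical stand-in for aio_poll():

#include <stdbool.h>
#include <stdio.h>

static int in_flight = 3;   /* pretend three requests are pending */

/* Stand-in for aio_poll(): returns true while any work is still pending,
 * even on iterations that complete nothing themselves. */
static bool fake_aio_poll(void)
{
    if (in_flight == 0) {
        return false;
    }
    in_flight--;             /* pretend one request completed */
    return true;
}

int main(void)
{
    /* bdrv_drain_all()-style loop: keep polling until nothing is pending.
     * If the poll function returned false while requests were still in
     * flight, this loop would stop too early. */
    while (fake_aio_poll()) {
        /* nothing */
    }
    printf("in_flight = %d\n", in_flight);   /* 0 */
    return 0;
}
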
Paolo Bonzini
204dd38c2d raw-posix: fix bdrv_aio_ioctl
When the raw-posix aio=thread code was moved from posix-aio-compat.c
to block/raw-posix.c, there was an unintended change to the ioctl code.
The code used to return the ioctl command, which posix_aio_read()
would later morph into a zero.  This hack is not necessary anymore,
and in fact breaks scsi-generic (which expects a zero return code).
Remove it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b608c8dc02)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:45 -06:00
Alex Williamson
86bab45948 vfio-pci: Loosen sanity checks to allow future features
VFIO_PCI_NUM_REGIONS and VFIO_PCI_NUM_IRQS should never have been
used in this manner as it locks a specific kernel implementation.
Future features may introduce new regions or interrupt entries
(VGA may add legacy ranges, AER might add an IRQ for error
signalling).  Fix this before it gets us into trouble.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit 8fc94e5a80)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:45 -06:00
Alex Williamson
006c747440 pci-assign: Enable MSIX on device to match guest
When a guest enables MSIX on a device we evaluate the MSIX vector
table, typically find no unmasked vectors and don't switch the device
to MSIX mode.  This generally works fine and the device will be
switched once the guest enables and therefore unmasks a vector.
Unfortunately some drivers enable MSIX, then use interfaces to send
commands between VF & PF or PF & firmware that act based on the host
state of the device.  These therefore may break when MSIX is managed
lazily.  This change re-enables the previous test used to enable MSIX
(see qemu-kvm a6b402c9), which basically guesses whether a vector
will be used based on the data field of the vector table.

Cc: qemu-stable@nongnu.org
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit feb9a2ab4b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:44 -06:00
Alex Williamson
f042cca009 vfio-pci: Make host MSI-X enable track guest
Guests typically enable MSI-X with all of the vectors in the MSI-X
vector table masked.  Only when the vector is enabled does the vector
get unmasked, resulting in a vector_use callback.  These two points,
enable and unmask, correspond to pci_enable_msix() and request_irq()
for Linux guests.  Some drivers rely on VF/PF or PF/fw communication
channels that expect the physical state of the device to match the
guest visible state of the device.  They don't appreciate lazily
enabling MSI-X on the physical device.

To solve this, enable MSI-X with a single vector when the MSI-X
capability is enabled and immediately disable the vector.  This leaves
the physical device in exactly the same state between host and guest.
Furthermore, during the brief gap where vector 0 is enabled, it fires into
userspace, not KVM, so the guest doesn't get spurious interrupts.
Ideally we could call VFIO_DEVICE_SET_IRQS with the right parameters
to enable MSI-X with zero vectors, but this will currently return an
error as the Linux MSI-X interfaces do not allow it.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit b0223e29af)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:44 -06:00
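A schematic of the enable-then-release trick with stubbed-out host calls (hypothetical names; the real code drives the VFIO ioctls):

#include <stdio.h>

/* Stubs standing in for the host-side VFIO plumbing. */
static void host_msix_vector_use(int nr)
{
    printf("host: MSI-X enabled, vector %d in use\n", nr);
}

static void host_msix_vector_release(int nr)
{
    printf("host: vector %d released, MSI-X still enabled\n", nr);
}

/* Called when the guest sets the MSI-X enable bit (vector table still fully
 * masked).  Briefly claim vector 0 and drop it again, so the physical device
 * ends up with MSI-X enabled but no vectors active, matching the guest view. */
static void on_guest_msix_enable(void)
{
    host_msix_vector_use(0);
    host_msix_vector_release(0);
}

int main(void)
{
    on_guest_msix_enable();
    return 0;
}
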
Max Filippov
1205b8080f target-xtensa: fix search_pc for the last TB opcode
Zero out tcg_ctx.gen_opc_instr_start for instructions representing the
last guest opcode in the TB.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 36f25d2537)

*modified to use older global version of gen_opc_instr_start

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:04:53 -06:00
Paolo Bonzini
ff0c079c14 buffered_file: do not send more than s->bytes_xfer bytes per tick
Sending more was possible if the buffer was large.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit bde54c08b4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:38:00 -06:00
Paolo Bonzini
d745511fc9 migration: fix migration_bitmap leak
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 244eaa7514)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:37:38 -06:00
Michael Contreras
5afd0ecaa6 e1000: Discard oversized packets based on SBP|LPE
Discard packets longer than 16384 when !SBP to match the hardware behavior.

Signed-off-by: Michael Contreras <michael@inetric.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 2c0331f4f7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:37:17 -06:00
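A compact sketch of the resulting receive-side check (standalone C; the constants and register bits mirror the ones named above but are defined locally for illustration):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define MAXIMUM_ETHERNET_VLAN_SIZE 1522   /* drop limit when LPE=0 */
#define MAXIMUM_ETHERNET_LPE_SIZE 16384   /* drop limit when LPE=1 */

#define RCTL_LPE 0x1   /* long packet enable (bit values illustrative) */
#define RCTL_SBP 0x2   /* store bad packets */

/* Mirror of the discard condition: drop if the packet is over the LPE limit,
 * or over the VLAN limit with LPE clear -- unless SBP is set. */
static bool discard_oversized(size_t size, unsigned rctl)
{
    return (size > MAXIMUM_ETHERNET_LPE_SIZE ||
            (size > MAXIMUM_ETHERNET_VLAN_SIZE && !(rctl & RCTL_LPE))) &&
           !(rctl & RCTL_SBP);
}

int main(void)
{
    printf("%d\n", discard_oversized(2000, 0));          /* 1: too long, LPE off */
    printf("%d\n", discard_oversized(2000, RCTL_LPE));   /* 0: LPE allows it */
    printf("%d\n", discard_oversized(20000, RCTL_LPE));  /* 1: over the hard limit */
    printf("%d\n", discard_oversized(20000, RCTL_SBP));  /* 0: SBP keeps it */
    return 0;
}
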
Uri Lublin
c4cd5b0f6d qxl+vnc: register a vm state change handler for dummy spice_server
When qxl + vnc are used, a dummy spice_server is initialized.
The spice_server has to be told when the VM runstate changes,
which is what this patch does.

Without it, from qxl_send_events(), the following error message is shown:
  qxl_send_events: spice-server bug: guest stopped, ignoring

Cc: qemu-stable@nongnu.org
Signed-off-by: Uri Lublin <uril@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 938b8a36b6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:36:22 -06:00
Gerd Hoffmann
7ca2496588 qxl: save qemu_create_displaysurface_from result
Spotted by Coverity.

https://bugzilla.redhat.com/show_bug.cgi?id=885644

Cc: qemu-stable@nongnu.org
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 2f464b5a32)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:35:40 -06:00
Max Filippov
bfae9374f1 target-xtensa: fix ITLB/DTLB page protection flags
With the MMU option, the xtensa architecture has two TLBs: ITLB and DTLB.
The ITLB is only used for code access, the DTLB only for data. However,
TLB entries in both TLBs have an attribute field controlling write and exec
access. These bits need to be properly masked off depending on the TLB type
before being used as the tlb_set_page prot argument. Otherwise the following
happens:

(1) ITLB entry for some PFN gets invalidated
(2) DTLB entry for the same PFN gets updated, attributes allow code
    execution
(3) code at the page with that PFN is executed (possible due to step 2),
    entry for the TB is written into the jump cache
(4) QEMU TLB entry for the PFN gets replaced with an entry for some
    other PFN
(5) code in the TB from step 3 is executed (possible due to jump cache)
    and it accesses data, for which there's no DTLB entry, causing DTLB
    miss exception
(6) re-translation of the TB from step 5 is attempted, but there's no
    QEMU TLB entry nor xtensa ITLB entry for that PFN, which causes ITLB
    miss exception at the TB start address
(7) ITLB miss exception is handled by the guest, but execution is
    resumed from the beginning of the faulting TB (the point where ITLB
    miss occurred), not from the point where DTLB miss occurred, which is
    wrong.

With that fix, the above scenario causes an ITLB miss exception (that used
to be step 7) at step 3, right at the beginning of the TB.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 659f807c0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:34:54 -06:00
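The core of the fix is masking the permissions before they reach the QEMU TLB; a standalone sketch with locally defined flag bits (not the target-xtensa source):

#include <stdio.h>

#define PAGE_READ  0x1
#define PAGE_WRITE 0x2
#define PAGE_EXEC  0x4

/* Raw access rights decoded from the guest TLB entry's attribute field. */
static int attr_to_access(int attr)
{
    (void)attr;
    return PAGE_READ | PAGE_WRITE | PAGE_EXEC;   /* pretend attr allows all */
}

/* A DTLB entry must never grant exec; an ITLB entry must never grant
 * read/write.  Mask the irrelevant bits off before use. */
static int tlb_prot(int attr, int dtlb)
{
    return attr_to_access(attr) &
           ~(dtlb ? PAGE_EXEC : (PAGE_READ | PAGE_WRITE));
}

int main(void)
{
    printf("DTLB prot: 0x%x\n", tlb_prot(0, 1));   /* 0x3: read|write only */
    printf("ITLB prot: 0x%x\n", tlb_prot(0, 0));   /* 0x4: exec only */
    return 0;
}
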
Gerd Hoffmann
b68c48ff01 pixman: fix vnc tight png/jpeg support
This patch adds an x argument to qemu_pixman_linebuf_fill so it can
also be used to convert a partial scanline.  Then fix tight + png/jpeg
encoding by passing in the x+y offset, so the data is read from the
correct screen location instead of the upper left corner.

Cc: 1087974@bugs.launchpad.net
Cc: qemu-stable@nongnu.org
Reported-by: Tim Hardeneck <thardeck@suse.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit bc210eb163)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:33:57 -06:00
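Conceptually the new x argument just moves where the copy starts within the source scanline; a plain-array sketch of the same idea, with no pixman dependency:

#include <assert.h>
#include <stdint.h>
#include <string.h>

#define FB_W 16
#define FB_H 8

/* Copy `width` pixels of framebuffer row `y`, starting at column `x`,
 * into a linebuf -- the partial-scanline case the new parameter enables. */
static void linebuf_fill(uint32_t *linebuf, uint32_t fb[FB_H][FB_W],
                         int width, int x, int y)
{
    memcpy(linebuf, &fb[y][x], width * sizeof(uint32_t));
}

int main(void)
{
    uint32_t fb[FB_H][FB_W];
    uint32_t line[4];
    int x, y;

    for (y = 0; y < FB_H; y++) {
        for (x = 0; x < FB_W; x++) {
            fb[y][x] = y * 100 + x;
        }
    }
    linebuf_fill(line, fb, 4, 5, 2);   /* 4 pixels of row 2, starting at column 5 */
    assert(line[0] == 205 && line[3] == 208);
    return 0;
}
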
Gerd Hoffmann
36fd8179b6 Update seabios to a810e4e72a0d42c7bc04eda57382f8e019add901
git shortlog:

Kevin O'Connor (6):
      floppy: Minor - reduce handle_0e code size when CONFIG_FLOPPY is disabled.
      vga: Minor comment spelling fix.
      Don't recursively evaluate CFLAGS variables.
      Don't use gcc's -combine option.
      Add compile checking phase to build.
      acpi: Use prt_slot() macro to describe irq pins of first PCI device.

Laszlo Ersek (1):
      maininit(): print machine UUID under seabios version message

Paolo Bonzini (1):
      acpi: reintroduce LNKS

Paolo's patch fixes the FreeBSD boot failure.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 15faf946f7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 10:47:39 -06:00
Gerd Hoffmann
0bc5f4ad63 seabios: update to e8a76b0f225bba5ba9d63ab227e0a37b3beb1059
This patch updates seabios to latest git master.  Changes:

  (1) q35 patches merged.
  (2) some acpi cleanups.
  (3) fixes irq 8 conflict.

(3) makes this a candidate for the stable branch

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit ff1562908d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 10:46:57 -06:00
Alex Williamson
37e1428cc7 vfio-pci: Don't use kvm_irqchip_in_kernel
kvm_irqchip_in_kernel() has an architecture-specific meaning, so
we shouldn't be using it to determine whether to enable KVM INTx
bypass.  kvm_irqfds_enabled() seems most appropriate.  Also use this
to protect our other call to kvm_check_extension() as that explodes
when KVM isn't enabled.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit d281084d3e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:45:06 -06:00
Petar Jovanovic
518799a3e7 target-mips: Fix incorrect shift for SHILO and SHILOV
helper_shilo has not been shifting the accumulator value correctly for negative
values in the 'shift' field. Minor optimization for the shift=0 case.
This change also adds tests that trigger the issue and check for regressions.

Signed-off-by: Petar Jovanovic <petarj@mips.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eric Johnson <ericj@mips.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 19e6c50d2d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:43:29 -06:00
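The 'shift' field is a signed 6-bit value: positive shifts the accumulator right, negative shifts it left by the magnitude. A standalone sketch of the sign handling (hypothetical helper, not the dsp_helper code):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Sign-extend the low 6 bits, then shift the 64-bit accumulator:
 * right for positive values, left by the magnitude for negative ones. */
static uint64_t shilo(uint64_t acc, unsigned shift_field)
{
    int s = (int8_t)((shift_field & 0x3F) << 2) >> 2;   /* signed 6-bit value */

    if (s == 0) {
        return acc;
    }
    return (s > 0) ? acc >> s : acc << -s;
}

int main(void)
{
    assert(shilo(0x8000000000ULL, 4) == 0x800000000ULL);      /* shift right by 4 */
    assert(shilo(0x0180000000ULL, 0x3F) == 0x0300000000ULL);  /* 0x3F == -1: left by 1 */
    printf("ok\n");
    return 0;
}
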
Petar Jovanovic
16c5fe49de target-mips: Fix incorrect code and test for INSV
The content of register rs should be shifted by pos before applying the mask.
This change contains both a fix for the instruction and a fix to the existing test.

Signed-off-by: Petar Jovanovic <petarj@mips.com>
Reviewed-by: Eric Johnson <ericj@mips.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 34f5606ee1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:42:38 -06:00
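A worked standalone example of why rs must be shifted into position before the mask is applied; assuming dsp = 0x305 encodes pos = 5 and size = 6, it reproduces both the corrected and the old test expectations:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Insert the low `size` bits of rs into rt at bit position `pos`. */
static uint32_t insv(uint32_t rt, uint32_t rs, int pos, int size)
{
    uint32_t filter = (((uint32_t)1 << size) - 1) << pos;

    return ((rs << pos) & filter) | (rt & ~filter);   /* shift, then mask */
}

/* The broken variant masked rs without shifting it into position first. */
static uint32_t insv_broken(uint32_t rt, uint32_t rs, int pos, int size)
{
    uint32_t filter = (((uint32_t)1 << size) - 1) << pos;

    return (rs & filter) | (rt & ~filter);
}

int main(void)
{
    uint32_t rt = 0x12345678, rs = 0x87654321;   /* values from the test case */

    assert(insv(rt, rs, 5, 6) == 0x12345438);          /* corrected expectation */
    assert(insv_broken(rt, rs, 5, 6) == 0x12345338);   /* the old, wrong result */
    printf("ok\n");
    return 0;
}
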
David Gibson
f1a2195ec3 migration: Fix madvise breakage if host and guest have different page sizes
madvise(DONTNEED) will throw away the contents of the whole page at the
given address, even if the given length is less than the page size.  One
can argue about whether that's the correct behaviour, but that's what it's
done for a long time in Linux at least.

That means that the madvise() in ram_load(), on a setup where
TARGET_PAGE_SIZE is smaller than the host page size, can throw away data
in guest pages adjacent to the one it's actually processing right now,
leading to guest memory corruption on an incoming migration.

This patch therefore disables the madvise() if the host page size is
larger than TARGET_PAGE_SIZE.  This means we don't get the benefits of that
madvise() in this case, but a more complete fix is more difficult to
accomplish.  This at least fixes the guest memory corruption.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 45e6cee42b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:41:11 -06:00
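A small sketch of the guard condition; TARGET_PAGE_SIZE is hard-coded here purely for illustration:

#include <stdio.h>
#include <unistd.h>

#define TARGET_PAGE_SIZE 4096   /* e.g. a 4K target page; illustrative only */

int main(void)
{
    int host_page = getpagesize();

    /* madvise(DONTNEED) discards whole host pages, so only use it when a
     * target page cannot share a host page with its neighbours. */
    if (host_page <= TARGET_PAGE_SIZE) {
        printf("safe to madvise zero pages (host page %d bytes)\n", host_page);
    } else {
        printf("skip madvise: host page %d > target page %d\n",
               host_page, TARGET_PAGE_SIZE);
    }
    return 0;
}
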
David Gibson
3b4fc1f9d2 Fix off-by-1 error in RAM migration code
The code for migrating (or savevm-ing) memory pages starts off by creating
a dirty bitmap and filling it with 1s.  Except, actually, because bit
addresses are 0-based it fills every bit except bit 0 with 1s and puts an
extra 1 beyond the end of the bitmap, potentially corrupting unrelated
memory.  Oops.  This patch fixes it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 7ec81e56ed)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:40:38 -06:00
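A toy bitmap_set() makes the off-by-one concrete: starting at bit 1 both misses bit 0 and touches one bit past the end (hypothetical helper; the real code works word-at-a-time):

#include <assert.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>

#define NBITS 64
static unsigned char bitmap[NBITS / CHAR_BIT];

/* Set `count` bits starting at bit `start`; asserts on out-of-range bits,
 * which is exactly what bitmap_set(map, 1, nbits) would trip over. */
static void bitmap_set(unsigned char *map, long start, long count)
{
    long i;

    for (i = start; i < start + count; i++) {
        assert(i >= 0 && i < NBITS);        /* bit 64 would corrupt memory */
        map[i / CHAR_BIT] |= 1u << (i % CHAR_BIT);
    }
}

int main(void)
{
    memset(bitmap, 0, sizeof(bitmap));
    bitmap_set(bitmap, 0, NBITS);           /* fixed call: fills bits 0..63 */
    assert(bitmap[0] & 1);                  /* bit 0 really is set */
    /* bitmap_set(bitmap, 1, NBITS) would assert: it skips bit 0 and
     * writes one bit past the end of the map. */
    printf("ok\n");
    return 0;
}
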
Brad Smith
d67d95f24e Disable semaphores fallback code for OpenBSD
Disable the semaphores fallback code for OpenBSD as modern OpenBSD
releases now have sem_timedwait().

Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 927fa909d5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:36:36 -06:00
Brad Smith
0a7ad69a0f Fix semaphores fallback code
As reported in bug 1087114, the semaphores fallback code is broken, which
results in QEMU crashing and becoming unusable.

This patch is from Paolo.

This needs to be back ported to the 1.3 stable tree as well.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit a795ef8dcb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:36:28 -06:00
37 changed files with 196 additions and 97 deletions

View File

@@ -1 +1 @@
-1.3.0
+1.3.1

View File

@@ -264,5 +264,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 }
 }
-return progress;
+assert(progress || busy);
+return true;
 }

View File

@@ -214,5 +214,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
 events[ret - WAIT_OBJECT_0] = events[--count];
 }
-return progress;
+assert(progress || busy);
+return true;
 }

View File

@@ -535,9 +535,13 @@ static void sort_ram_list(void)
 static void migration_end(void)
 {
-memory_global_dirty_log_stop();
+if (migration_bitmap) {
+memory_global_dirty_log_stop();
+g_free(migration_bitmap);
+migration_bitmap = NULL;
+}
-if (migrate_use_xbzrle()) {
+if (XBZRLE.cache) {
 cache_fini(XBZRLE.cache);
 g_free(XBZRLE.cache);
 g_free(XBZRLE.encoded_buf);
@@ -568,7 +572,7 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
 int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
 migration_bitmap = bitmap_new(ram_pages);
-bitmap_set(migration_bitmap, 1, ram_pages);
+bitmap_set(migration_bitmap, 0, ram_pages);
 migration_dirty_pages = ram_pages;
 bytes_transferred = 0;
@@ -689,13 +693,10 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
 }
 bytes_transferred += bytes_sent;
 }
-memory_global_dirty_log_stop();
+migration_end();
 qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
-g_free(migration_bitmap);
-migration_bitmap = NULL;
 return 0;
 }
@@ -840,7 +841,8 @@ static int ram_load(QEMUFile *f, void *opaque, int version_id)
 memset(host, ch, TARGET_PAGE_SIZE);
 #ifndef _WIN32
 if (ch == 0 &&
-(!kvm_enabled() || kvm_has_sync_mmu())) {
+(!kvm_enabled() || kvm_has_sync_mmu()) &&
+getpagesize() <= TARGET_PAGE_SIZE) {
 qemu_madvise(host, TARGET_PAGE_SIZE, QEMU_MADV_DONTNEED);
 }
 #endif

View File

@@ -456,15 +456,7 @@ static ssize_t handle_aiocb_ioctl(RawPosixAIOData *aiocb)
 return -errno;
 }
-/*
-* This looks weird, but the aio code only considers a request
-* successful if it has written the full number of bytes.
-*
-* Now we overload aio_nbytes as aio_ioctl_cmd for the ioctl command,
-* so in fact we return the ioctl command here to make posix_aio_read()
-* happy..
-*/
-return aiocb->aio_nbytes;
+return 0;
 }
 static ssize_t handle_aiocb_flush(RawPosixAIOData *aiocb)

View File

@@ -84,11 +84,11 @@ static void win32_aio_process_completion(QEMUWin32AIOState *s,
 int i;
 for (i = 0; i < qiov->niov; ++i) {
-memcpy(p, qiov->iov[i].iov_base, qiov->iov[i].iov_len);
+memcpy(qiov->iov[i].iov_base, p, qiov->iov[i].iov_len);
 p += qiov->iov[i].iov_len;
 }
-g_free(waiocb->buf);
 }
+qemu_vfree(waiocb->buf);
 }

View File

@@ -66,9 +66,9 @@ static ssize_t buffered_flush(QEMUFileBuffered *s)
 DPRINTF("flushing %zu byte(s) of data\n", s->buffer_size);
 while (s->bytes_xfer < s->xfer_limit && offset < s->buffer_size) {
+size_t to_send = MIN(s->buffer_size - offset, s->xfer_limit - s->bytes_xfer);
 ret = migrate_fd_put_buffer(s->migration_state, s->buffer + offset,
-s->buffer_size - offset);
+to_send);
 if (ret == -EAGAIN) {
 DPRINTF("backend not ready, freezing\n");
 ret = 0;

View File

@@ -61,6 +61,8 @@ static int debugflags = DBGBIT(TXERR) | DBGBIT(GENERAL);
 /* this is the size past which hardware will drop packets when setting LPE=0 */
 #define MAXIMUM_ETHERNET_VLAN_SIZE 1522
+/* this is the size past which hardware will drop packets when setting LPE=1 */
+#define MAXIMUM_ETHERNET_LPE_SIZE 16384
 /*
 * HW models:
@@ -809,8 +811,9 @@ e1000_receive(NetClientState *nc, const uint8_t *buf, size_t size)
 }
 /* Discard oversized packets if !LPE and !SBP. */
-if (size > MAXIMUM_ETHERNET_VLAN_SIZE
-&& !(s->mac_reg[RCTL] & E1000_RCTL_LPE)
+if ((size > MAXIMUM_ETHERNET_LPE_SIZE ||
+(size > MAXIMUM_ETHERNET_VLAN_SIZE
+&& !(s->mac_reg[RCTL] & E1000_RCTL_LPE)))
 && !(s->mac_reg[RCTL] & E1000_RCTL_SBP)) {
 return size;
 }

View File

@@ -1025,6 +1025,19 @@ static bool assigned_dev_msix_masked(MSIXTableEntry *entry)
 return (entry->ctrl & cpu_to_le32(0x1)) != 0;
 }
+/*
+* When MSI-X is first enabled the vector table typically has all the
+* vectors masked, so we can't use that as the obvious test to figure out
+* how many vectors to initially enable. Instead we look at the data field
+* because this is what worked for pci-assign for a long time. This makes
+* sure the physical MSI-X state tracks the guest's view, which is important
+* for some VF/PF and PF/fw communication channels.
+*/
+static bool assigned_dev_msix_skipped(MSIXTableEntry *entry)
+{
+return !entry->data;
+}
 static int assigned_dev_update_msix_mmio(PCIDevice *pci_dev)
 {
 AssignedDevice *adev = DO_UPCAST(AssignedDevice, dev, pci_dev);
@@ -1035,7 +1048,7 @@ static int assigned_dev_update_msix_mmio(PCIDevice *pci_dev)
 /* Get the usable entry number for allocating */
 for (i = 0; i < adev->msix_max; i++, entry++) {
-if (assigned_dev_msix_masked(entry)) {
+if (assigned_dev_msix_skipped(entry)) {
 continue;
 }
 entries_nr++;
@@ -1064,7 +1077,7 @@ static int assigned_dev_update_msix_mmio(PCIDevice *pci_dev)
 for (i = 0; i < adev->msix_max; i++, entry++) {
 adev->msi_virq[i] = -1;
-if (assigned_dev_msix_masked(entry)) {
+if (assigned_dev_msix_skipped(entry)) {
 continue;
 }

View File

@@ -113,11 +113,12 @@ static void qxl_render_update_area_unlocked(PCIQXLDevice *qxl)
 qxl->guest_primary.bits_pp);
 if (qxl->guest_primary.qxl_stride > 0) {
 qemu_free_displaysurface(vga->ds);
-qemu_create_displaysurface_from(qxl->guest_primary.surface.width,
-qxl->guest_primary.surface.height,
-qxl->guest_primary.bits_pp,
-qxl->guest_primary.abs_stride,
-qxl->guest_primary.data);
+vga->ds->surface = qemu_create_displaysurface_from
+(qxl->guest_primary.surface.width,
+qxl->guest_primary.surface.height,
+qxl->guest_primary.bits_pp,
+qxl->guest_primary.abs_stride,
+qxl->guest_primary.data);
 } else {
 qemu_resize_displaysurface(vga->ds,
 qxl->guest_primary.surface.width,

View File

@@ -37,33 +37,25 @@
 */
 #undef SPICE_RING_PROD_ITEM
 #define SPICE_RING_PROD_ITEM(qxl, r, ret) { \
-typeof(r) start = r; \
-typeof(r) end = r + 1; \
 uint32_t prod = (r)->prod & SPICE_RING_INDEX_MASK(r); \
-typeof(&(r)->items[prod]) m_item = &(r)->items[prod]; \
-if (!((uint8_t*)m_item >= (uint8_t*)(start) && (uint8_t*)(m_item + 1) <= (uint8_t*)(end))) { \
+if (prod >= ARRAY_SIZE((r)->items)) { \
 qxl_set_guest_bug(qxl, "SPICE_RING_PROD_ITEM indices mismatch " \
-"! %p <= %p < %p", (uint8_t *)start, \
-(uint8_t *)m_item, (uint8_t *)end); \
+"%u >= %zu", prod, ARRAY_SIZE((r)->items)); \
 ret = NULL; \
 } else { \
-ret = &m_item->el; \
+ret = &(r)->items[prod].el; \
 } \
 }
 #undef SPICE_RING_CONS_ITEM
 #define SPICE_RING_CONS_ITEM(qxl, r, ret) { \
-typeof(r) start = r; \
-typeof(r) end = r + 1; \
 uint32_t cons = (r)->cons & SPICE_RING_INDEX_MASK(r); \
-typeof(&(r)->items[cons]) m_item = &(r)->items[cons]; \
-if (!((uint8_t*)m_item >= (uint8_t*)(start) && (uint8_t*)(m_item + 1) <= (uint8_t*)(end))) { \
+if (cons >= ARRAY_SIZE((r)->items)) { \
 qxl_set_guest_bug(qxl, "SPICE_RING_CONS_ITEM indices mismatch " \
-"! %p <= %p < %p", (uint8_t *)start, \
-(uint8_t *)m_item, (uint8_t *)end); \
+"%u >= %zu", cons, ARRAY_SIZE((r)->items)); \
 ret = NULL; \
 } else { \
-ret = &m_item->el; \
+ret = &(r)->items[cons].el; \
 } \
 }

View File

@@ -275,7 +275,7 @@ static void vfio_enable_intx_kvm(VFIODevice *vdev)
 int ret, argsz;
 int32_t *pfd;
-if (!kvm_irqchip_in_kernel() ||
+if (!kvm_irqfds_enabled() ||
 vdev->intx.route.mode != PCI_INTX_ENABLED ||
 !kvm_check_extension(kvm_state, KVM_CAP_IRQFD_RESAMPLE)) {
 return;
@@ -438,7 +438,8 @@ static int vfio_enable_intx(VFIODevice *vdev)
 * Only conditional to avoid generating error messages on platforms
 * where we won't actually use the result anyway.
 */
-if (kvm_check_extension(kvm_state, KVM_CAP_IRQFD_RESAMPLE)) {
+if (kvm_irqfds_enabled() &&
+kvm_check_extension(kvm_state, KVM_CAP_IRQFD_RESAMPLE)) {
 vdev->intx.route = pci_device_route_intx_to_irq(&vdev->pdev,
 vdev->intx.pin);
 }
@@ -561,8 +562,8 @@ static int vfio_enable_vectors(VFIODevice *vdev, bool msix)
 return ret;
 }
-static int vfio_msix_vector_use(PCIDevice *pdev,
-unsigned int nr, MSIMessage msg)
+static int vfio_msix_vector_do_use(PCIDevice *pdev, unsigned int nr,
+MSIMessage *msg, IOHandler *handler)
 {
 VFIODevice *vdev = DO_UPCAST(VFIODevice, pdev, pdev);
 VFIOMSIVector *vector;
@@ -586,7 +587,7 @@ static int vfio_msix_vector_use(PCIDevice *pdev,
 * Attempt to enable route through KVM irqchip,
 * default to userspace handling if unavailable.
 */
-vector->virq = kvm_irqchip_add_msi_route(kvm_state, msg);
+vector->virq = msg ? kvm_irqchip_add_msi_route(kvm_state, *msg) : -1;
 if (vector->virq < 0 ||
 kvm_irqchip_add_irqfd_notifier(kvm_state, &vector->interrupt,
 vector->virq) < 0) {
@@ -595,7 +596,7 @@ static int vfio_msix_vector_use(PCIDevice *pdev,
 vector->virq = -1;
 }
 qemu_set_fd_handler(event_notifier_get_fd(&vector->interrupt),
-vfio_msi_interrupt, NULL, vector);
+handler, NULL, vector);
 }
 /*
@@ -638,6 +639,12 @@ static int vfio_msix_vector_use(PCIDevice *pdev,
 return 0;
 }
+static int vfio_msix_vector_use(PCIDevice *pdev,
+unsigned int nr, MSIMessage msg)
+{
+return vfio_msix_vector_do_use(pdev, nr, &msg, vfio_msi_interrupt);
+}
 static void vfio_msix_vector_release(PCIDevice *pdev, unsigned int nr)
 {
 VFIODevice *vdev = DO_UPCAST(VFIODevice, pdev, pdev);
@@ -696,6 +703,22 @@ static void vfio_enable_msix(VFIODevice *vdev)
 vdev->interrupt = VFIO_INT_MSIX;
+/*
+* Some communication channels between VF & PF or PF & fw rely on the
+* physical state of the device and expect that enabling MSI-X from the
+* guest enables the same on the host. When our guest is Linux, the
+* guest driver call to pci_enable_msix() sets the enabling bit in the
+* MSI-X capability, but leaves the vector table masked. We therefore
+* can't rely on a vector_use callback (from request_irq() in the guest)
+* to switch the physical device into MSI-X mode because that may come a
+* long time after pci_enable_msix(). This code enables vector 0 with
+* triggering to userspace, then immediately release the vector, leaving
+* the physical device with no vectors enabled, but MSI-X enabled, just
+* like the guest view.
+*/
+vfio_msix_vector_do_use(&vdev->pdev, 0, NULL, NULL);
+vfio_msix_vector_release(&vdev->pdev, 0);
 if (msix_set_vector_notifiers(&vdev->pdev, vfio_msix_vector_use,
 vfio_msix_vector_release)) {
 error_report("vfio: msix_set_vector_notifiers failed\n");
@@ -1814,13 +1837,13 @@ static int vfio_get_device(VFIOGroup *group, const char *name, VFIODevice *vdev)
 error_report("Warning, device %s does not support reset\n", name);
 }
-if (dev_info.num_regions != VFIO_PCI_NUM_REGIONS) {
+if (dev_info.num_regions < VFIO_PCI_CONFIG_REGION_INDEX + 1) {
 error_report("vfio: unexpected number of io regions %u\n",
 dev_info.num_regions);
 goto error;
 }
-if (dev_info.num_irqs != VFIO_PCI_NUM_IRQS) {
+if (dev_info.num_irqs < VFIO_PCI_MSIX_IRQ_INDEX + 1) {
 error_report("vfio: unexpected number of irqs %u\n", dev_info.num_irqs);
 goto error;
 }

View File

@@ -2413,7 +2413,7 @@ void ppm_save(const char *filename, struct DisplaySurface *ds, Error **errp)
 }
 linebuf = qemu_pixman_linebuf_create(PIXMAN_BE_r8g8b8, width);
 for (y = 0; y < height; y++) {
-qemu_pixman_linebuf_fill(linebuf, ds->image, width, y);
+qemu_pixman_linebuf_fill(linebuf, ds->image, width, 0, y);
 clearerr(f);
 ret = fwrite(pixman_image_get_data(linebuf), 1,
 pixman_image_get_stride(linebuf), f);

View File

@@ -113,6 +113,31 @@ struct XenBlkDev {
 /* ------------------------------------------------------------- */
+static void ioreq_reset(struct ioreq *ioreq)
+{
+memset(&ioreq->req, 0, sizeof(ioreq->req));
+ioreq->status = 0;
+ioreq->start = 0;
+ioreq->presync = 0;
+ioreq->postsync = 0;
+ioreq->mapped = 0;
+memset(ioreq->domids, 0, sizeof(ioreq->domids));
+memset(ioreq->refs, 0, sizeof(ioreq->refs));
+ioreq->prot = 0;
+memset(ioreq->page, 0, sizeof(ioreq->page));
+ioreq->pages = NULL;
+ioreq->aio_inflight = 0;
+ioreq->aio_errors = 0;
+ioreq->blkdev = NULL;
+memset(&ioreq->list, 0, sizeof(ioreq->list));
+memset(&ioreq->acct, 0, sizeof(ioreq->acct));
+qemu_iovec_reset(&ioreq->v);
+}
 static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
 {
 struct ioreq *ioreq = NULL;
@@ -130,7 +155,6 @@ static struct ioreq *ioreq_start(struct XenBlkDev *blkdev)
 /* get one from freelist */
 ioreq = QLIST_FIRST(&blkdev->freelist);
 QLIST_REMOVE(ioreq, list);
-qemu_iovec_reset(&ioreq->v);
 }
 QLIST_INSERT_HEAD(&blkdev->inflight, ioreq, list);
 blkdev->requests_inflight++;
@@ -154,7 +178,7 @@ static void ioreq_release(struct ioreq *ioreq, bool finish)
 struct XenBlkDev *blkdev = ioreq->blkdev;
 QLIST_REMOVE(ioreq, list);
-memset(ioreq, 0, sizeof(*ioreq));
+ioreq_reset(ioreq);
 ioreq->blkdev = blkdev;
 QLIST_INSERT_HEAD(&blkdev->freelist, ioreq, list);
 if (finish) {

View File

@@ -671,7 +671,8 @@ static int xen_pt_initfn(PCIDevice *d)
 s->is_virtfn = s->real_device.is_virtfn;
 if (s->is_virtfn) {
 XEN_PT_LOG(d, "%04x:%02x:%02x.%d is a SR-IOV Virtual Function\n",
-s->real_device.domain, bus, slot, func);
+s->real_device.domain, s->real_device.bus,
+s->real_device.dev, s->real_device.func);
 }
 /* Initialize virtualized PCI configuration (Extended 256 Bytes) */
@@ -752,7 +753,7 @@ out:
 memory_listener_register(&s->memory_listener, &address_space_memory);
 memory_listener_register(&s->io_listener, &address_space_io);
 XEN_PT_LOG(d, "Real physical device %02x:%02x.%d registered successfuly!\n",
-bus, slot, func);
+s->hostaddr.bus, s->hostaddr.slot, s->hostaddr.function);
 return 0;
 }

View File

@@ -321,7 +321,7 @@ static int xen_pt_msix_update_one(XenPCIPassthroughState *s, int entry_nr)
 pirq = entry->pirq;
-rc = msi_msix_setup(s, entry->data, entry->data, &pirq, true, entry_nr,
+rc = msi_msix_setup(s, entry->addr, entry->data, &pirq, true, entry_nr,
 entry->pirq == XEN_PT_UNASSIGNED_PIRQ);
 if (rc) {
 return rc;

Binary file not shown.

Binary file not shown.

BIN pc-bios/q35-acpi-dsdt.aml (new file)

Binary file not shown.

View File

@@ -177,16 +177,14 @@ bool aio_pending(AioContext *ctx);
 * aio as a result of executing I/O completion or bh callbacks.
 *
 * If there is no pending AIO operation or completion (bottom half),
-* return false. If there are pending bottom halves, return true.
+* return false. If there are pending AIO operations of bottom halves,
+* return true.
 *
 * If there are no pending bottom halves, but there are pending AIO
 * operations, it may not be possible to make any progress without
 * blocking. If @blocking is true, this function will wait until one
 * or more AIO events have completed, to ensure something has moved
 * before returning.
-*
-* If @blocking is false, this function will also return false if the
-* function cannot make any progress without blocking.
 */
 bool aio_poll(AioContext *ctx, bool blocking);

View File

@@ -52,10 +52,10 @@ pixman_image_t *qemu_pixman_linebuf_create(pixman_format_code_t format,
 }
 void qemu_pixman_linebuf_fill(pixman_image_t *linebuf, pixman_image_t *fb,
-int width, int y)
+int width, int x, int y)
 {
 pixman_image_composite(PIXMAN_OP_SRC, fb, NULL, linebuf,
-0, y, 0, 0, 0, 0, width, 1);
+x, y, 0, 0, 0, 0, width, 1);
 }
 pixman_image_t *qemu_pixman_mirror_create(pixman_format_code_t format,

View File

@@ -31,7 +31,7 @@ pixman_format_code_t qemu_pixman_get_format(PixelFormat *pf);
 pixman_image_t *qemu_pixman_linebuf_create(pixman_format_code_t format,
 int width);
 void qemu_pixman_linebuf_fill(pixman_image_t *linebuf, pixman_image_t *fb,
-int width, int y);
+int width, int x, int y);
 pixman_image_t *qemu_pixman_mirror_create(pixman_format_code_t format,
 pixman_image_t *image);
 void qemu_pixman_image_unref(pixman_image_t *image);

View File

@@ -122,7 +122,7 @@ void qemu_sem_init(QemuSemaphore *sem, int init)
 {
 int rc;
-#if defined(__OpenBSD__) || defined(__APPLE__) || defined(__NetBSD__)
+#if defined(__APPLE__) || defined(__NetBSD__)
 rc = pthread_mutex_init(&sem->lock, NULL);
 if (rc != 0) {
 error_exit(rc, __func__);
@@ -147,7 +147,7 @@ void qemu_sem_destroy(QemuSemaphore *sem)
 {
 int rc;
-#if defined(__OpenBSD__) || defined(__APPLE__) || defined(__NetBSD__)
+#if defined(__APPLE__) || defined(__NetBSD__)
 rc = pthread_cond_destroy(&sem->cond);
 if (rc < 0) {
 error_exit(rc, __func__);
@@ -168,7 +168,7 @@ void qemu_sem_post(QemuSemaphore *sem)
 {
 int rc;
-#if defined(__OpenBSD__) || defined(__APPLE__) || defined(__NetBSD__)
+#if defined(__APPLE__) || defined(__NetBSD__)
 pthread_mutex_lock(&sem->lock);
 if (sem->count == INT_MAX) {
 rc = EINVAL;
@@ -206,13 +206,14 @@ int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
 int rc;
 struct timespec ts;
-#if defined(__OpenBSD__) || defined(__APPLE__) || defined(__NetBSD__)
+#if defined(__APPLE__) || defined(__NetBSD__)
 compute_abs_deadline(&ts, ms);
 pthread_mutex_lock(&sem->lock);
 --sem->count;
 while (sem->count < 0) {
 rc = pthread_cond_timedwait(&sem->cond, &sem->lock, &ts);
 if (rc == ETIMEDOUT) {
+ ++sem->count;
 break;
 }
 if (rc != 0) {
@@ -248,7 +249,7 @@ int qemu_sem_timedwait(QemuSemaphore *sem, int ms)
 void qemu_sem_wait(QemuSemaphore *sem)
 {
-#if defined(__OpenBSD__) || defined(__APPLE__) || defined(__NetBSD__)
+#if defined(__APPLE__) || defined(__NetBSD__)
 pthread_mutex_lock(&sem->lock);
 --sem->count;
 while (sem->count < 0) {

View File

@@ -12,7 +12,7 @@ struct QemuCond {
 };
 struct QemuSemaphore {
-#if defined(__OpenBSD__) || defined(__APPLE__) || defined(__NetBSD__)
+#if defined(__APPLE__) || defined(__NetBSD__)
 pthread_mutex_t lock;
 pthread_cond_t cond;
 int count;

View File

@@ -3152,7 +3152,7 @@ target_ulong helper_##name(CPUMIPSState *env, target_ulong rs, \
 \
 filter = ((int32_t)0x01 << size) - 1; \
 filter = filter << pos; \
-temprs = rs & filter; \
+temprs = (rs << pos) & filter; \
 temprt = rt & ~filter; \
 temp = temprs | temprt; \
 \
@@ -3814,17 +3814,18 @@ void helper_shilo(target_ulong ac, target_ulong rs, CPUMIPSState *env)
 rs5_0 = rs & 0x3F;
 rs5_0 = (int8_t)(rs5_0 << 2) >> 2;
-rs5_0 = MIPSDSP_ABS(rs5_0);
+if (unlikely(rs5_0 == 0)) {
+return;
+}
 acc = (((uint64_t)env->active_tc.HI[ac] << 32) & MIPSDSP_LHI) |
 ((uint64_t)env->active_tc.LO[ac] & MIPSDSP_LLO);
-if (rs5_0 == 0) {
-temp = acc;
+if (rs5_0 > 0) {
+temp = acc >> rs5_0;
 } else {
-if (rs5_0 > 0) {
-temp = acc >> rs5_0;
-} else {
-temp = acc << rs5_0;
-}
+temp = acc << -rs5_0;
 }
 env->active_tc.HI[ac] = (target_ulong)(int32_t)((temp & MIPSDSP_LHI) >> 32);

View File

@@ -486,7 +486,8 @@ static int get_physical_addr_mmu(CPUXtensaState *env, bool update_tlb,
 INST_FETCH_PRIVILEGE_CAUSE;
 }
-*access = mmu_attr_to_access(entry->attr);
+*access = mmu_attr_to_access(entry->attr) &
+~(dtlb ? PAGE_EXEC : PAGE_READ | PAGE_WRITE);
 if (!is_access_granted(*access, is_write)) {
 return dtlb ?
 (is_write ?

View File

@@ -2962,7 +2962,11 @@ static void gen_intermediate_code_internal(
 gen_icount_end(tb, insn_count);
 *tcg_ctx.gen_opc_ptr = INDEX_op_end;
-if (!search_pc) {
+if (search_pc) {
+j = tcg_ctx.gen_opc_ptr - tcg_ctx.gen_opc_buf;
+memset(gen_opc_instr_start + lj + 1, 0,
+(j - lj) * sizeof(gen_opc_instr_start[0]));
+} else {
 tb->size = dc.pc - pc_start;
 tb->icount = insn_count;
 }

View File

@@ -1145,7 +1145,7 @@ static inline void tcg_out_qemu_ld(TCGContext *s, const TCGArg *args, int opc)
 TCG_REG_R0, SHIFT_IMM_LSL(CPU_TLB_ENTRY_BITS));
 /* We assume that the offset is contained within 20 bits. */
 tlb_offset = offsetof(CPUArchState, tlb_table[mem_index][0].addr_read);
-assert(tlb_offset & ~0xfffff == 0);
+assert((tlb_offset & ~0xfffff) == 0);
 if (tlb_offset > 0xfff) {
 tcg_out_dat_imm(s, COND_AL, ARITH_ADD, TCG_REG_R0, TCG_REG_R0,
 0xa00 | (tlb_offset >> 12));
@@ -1354,7 +1354,7 @@ static inline void tcg_out_qemu_st(TCGContext *s, const TCGArg *args, int opc)
 TCG_AREG0, TCG_REG_R0, SHIFT_IMM_LSL(CPU_TLB_ENTRY_BITS));
 /* We assume that the offset is contained within 20 bits. */
 tlb_offset = offsetof(CPUArchState, tlb_table[mem_index][0].addr_write);
-assert(tlb_offset & ~0xfffff == 0);
+assert((tlb_offset & ~0xfffff) == 0);
 if (tlb_offset > 0xfff) {
 tcg_out_dat_imm(s, COND_AL, ARITH_ADD, TCG_REG_R0, TCG_REG_R0,
 0xa00 | (tlb_offset >> 12));

View File

@@ -10,7 +10,7 @@ int main()
 dsp = 0x305;
 rt = 0x12345678;
 rs = 0x87654321;
-result = 0x12345338;
+result = 0x12345438;
 __asm
 ("wrdsp %2, 0x03\n\t"
 "insv %0, %1\n\t"

View File

@@ -23,5 +23,23 @@ int main()
 assert(ach == resulth);
 assert(acl == resultl);
+ach = 0x1;
+acl = 0x80000000;
+resulth = 0x3;
+resultl = 0x0;
+__asm
+("mthi %0, $ac1\n\t"
+"mtlo %1, $ac1\n\t"
+"shilo $ac1, -1\n\t"
+"mfhi %0, $ac1\n\t"
+"mflo %1, $ac1\n\t"
+: "+r"(ach), "+r"(acl)
+);
+assert(ach == resulth);
+assert(acl == resultl);
 return 0;
 }

View File

@@ -25,5 +25,25 @@ int main()
 assert(ach == resulth);
 assert(acl == resultl);
+rs = 0xffffffff;
+ach = 0x1;
+acl = 0x80000000;
+resulth = 0x3;
+resultl = 0x0;
+__asm
+("mthi %0, $ac1\n\t"
+"mtlo %1, $ac1\n\t"
+"shilov $ac1, %2\n\t"
+"mfhi %0, $ac1\n\t"
+"mflo %1, $ac1\n\t"
+: "+r"(ach), "+r"(acl)
+: "r"(rs)
+);
+assert(ach == resulth);
+assert(acl == resultl);
 return 0;
 }

View File

@@ -315,13 +315,13 @@ static void test_wait_event_notifier_noflush(void)
 event_notifier_set(&data.e);
 g_assert(aio_poll(ctx, false));
 g_assert_cmpint(data.n, ==, 1);
-g_assert(!aio_poll(ctx, false));
+g_assert(aio_poll(ctx, false));
 g_assert_cmpint(data.n, ==, 1);
 event_notifier_set(&data.e);
 g_assert(aio_poll(ctx, false));
 g_assert_cmpint(data.n, ==, 2);
-g_assert(!aio_poll(ctx, false));
+g_assert(aio_poll(ctx, false));
 g_assert_cmpint(data.n, ==, 2);
 event_notifier_set(&dummy.e);

View File

@@ -732,6 +732,8 @@ int qemu_spice_add_interface(SpiceBaseInstance *sin)
 */
 spice_server = spice_server_new();
 spice_server_init(spice_server, &core_interface);
+qemu_add_vm_change_state_handler(vm_change_state_handler,
+&spice_server);
 }
 return spice_server_add_interface(spice_server, sin);

View File

@@ -1212,7 +1212,7 @@ static int send_jpeg_rect(VncState *vs, int x, int y, int w, int h, int quality)
 buf = (uint8_t *)pixman_image_get_data(linebuf);
 row[0] = buf;
 for (dy = 0; dy < h; dy++) {
-qemu_pixman_linebuf_fill(linebuf, vs->vd->server, w, dy);
+qemu_pixman_linebuf_fill(linebuf, vs->vd->server, w, x, y + dy);
 jpeg_write_scanlines(&cinfo, row, 1);
 }
 qemu_pixman_image_unref(linebuf);
@@ -1356,7 +1356,7 @@ static int send_png_rect(VncState *vs, int x, int y, int w, int h,
 if (color_type == PNG_COLOR_TYPE_PALETTE) {
 memcpy(buf, vs->tight.tight.buffer + (dy * w), w);
 } else {
-qemu_pixman_linebuf_fill(linebuf, vs->vd->server, w, dy);
+qemu_pixman_linebuf_fill(linebuf, vs->vd->server, w, x, y + dy);
 }
 png_write_row(png_ptr, buf);
 }

View File

@@ -2569,7 +2569,7 @@ static int vnc_refresh_server_surface(VncDisplay *vd)
 uint8_t *server_ptr;
 if (vd->guest.format != VNC_SERVER_FB_FORMAT) {
-qemu_pixman_linebuf_fill(tmpbuf, vd->guest.fb, width, y);
+qemu_pixman_linebuf_fill(tmpbuf, vd->guest.fb, width, 0, y);
 guest_ptr = (uint8_t *)pixman_image_get_data(tmpbuf);
 } else {
 guest_ptr = guest_row;

View File

@@ -292,7 +292,8 @@ static int xen_add_to_physmap(XenIOState *state,
 return -1;
 go_physmap:
-DPRINTF("mapping vram to %llx - %llx\n", start_addr, start_addr + size);
+DPRINTF("mapping vram to %"HWADDR_PRIx" - %"HWADDR_PRIx"\n",
+start_addr, start_addr + size);
 pfn = phys_offset >> TARGET_PAGE_BITS;
 start_gpfn = start_addr >> TARGET_PAGE_BITS;
@@ -365,8 +366,8 @@ static int xen_remove_from_physmap(XenIOState *state,
 phys_offset = physmap->phys_offset;
 size = physmap->size;
-DPRINTF("unmapping vram to %llx - %llx, from %llx\n",
-phys_offset, phys_offset + size, start_addr);
+DPRINTF("unmapping vram to %"HWADDR_PRIx" - %"HWADDR_PRIx", from ",
+"%"HWADDR_PRIx"\n", phys_offset, phys_offset + size, start_addr);
 size >>= TARGET_PAGE_BITS;
 start_addr >>= TARGET_PAGE_BITS;