ppc patch queue 2017-07-11
* Several minor cleanups from Greg Kurz
* Fix for migration of pseries-2.7 and earlier machine types
* More reworking of the DRC hotplug code, fixing several problems
though there are still more to go
* Fixes for CPU family / alias handling on POWER9
* Preliminary patches for POWER9 XIVE (new interrupt controller)
support
* Assorted other fixes
# gpg: Signature made Tue 11 Jul 2017 05:35:16 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170711:
spapr: populate device tree depending on XIVE_EXPLOIT option
spapr: introduce the XIVE_EXPLOIT option in CAS
ppc/kvm: have the "family" CPU alias to point to TYPE_HOST_POWERPC_CPU
spapr: Only report host/guest IOMMU page size mismatches on KVM
spapr: fix memory hotplug error path
target/ppc: Add debug function for radix mmu translation
target/ppc: Refactor tcg radix mmu code
spapr: Use unplug_request for PCI hot unplug
spapr: Remove unnecessary differences between hotplug and coldplug paths
spapr: Add DRC release method
spapr: Uniform DRC reset paths
spapr: Leave DR-indicator management to the guest
target-ppc: SPR_BOOKE_ESR not set on FP exceptions
spapr: fix migration to pseries machine < 2.8
spapr: fix bogus function name in comment
spapr: refresh "platform-specific" hcalls comment
spapr: make spapr_populate_hotplug_cpu_dt() static
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
For v7M, writes to the CONTROL register are only permitted for
privileged code. However even if the code is privileged, the
write must not affect the SPSEL bit in the CONTROL register
unless the CPU is in Thread mode (as documented in the pseudocode
for the MSR instruction). Implement this, instead of permitting
SPSEL to be written in all cases.
This was causing mbed applications not to run, because the
RTX RTOS they use relies on this behaviour.
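A minimal self-contained sketch of the intended behaviour (the helper and
field names are illustrative, not the actual QEMU code):

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        bool npriv;   /* CONTROL.nPRIV */
        bool spsel;   /* CONTROL.SPSEL */
    } V7MControl;

    /* Apply an MSR write to CONTROL, following the v7M pseudocode. */
    static void v7m_write_control(V7MControl *c, uint32_t val,
                                  bool privileged, bool thread_mode)
    {
        if (!privileged) {
            return;                     /* unprivileged writes are ignored */
        }
        if (thread_mode) {
            c->spsel = (val >> 1) & 1;  /* SPSEL only changes in Thread mode */
        }
        c->npriv = val & 1;
    }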
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1498820791-8130-1-git-send-email-peter.maydell@linaro.org
When running with KVM enabled, you can choose between emulating the
gic in kernel or user space. If the kernel supports in-kernel virtualization
of the interrupt controller, it will default to that. If not, it will
default to user space emulation.
Unfortunately, when running with user space gic emulation, we miss out on
interrupt events which are only available from kernel space, such as the timer.
This patch leverages the new kernel/user space pending line synchronization for
timer events. It does not handle PMU events yet.
Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Andrew Jones <drjones@redhat.com>
Message-id: 1498577737-130264-1-git-send-email-agraf@suse.de
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The ast2400 contains two and the ast2500 contains three watchdogs.
Add this information to the AspeedSoCInfo and realise the correct number
of watchdogs for each SoC type.
Signed-off-by: Joel Stanley <joel@jms.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add emulation for Exynos4210 Pseudo Random Number Generator which could
work on fixed seeds or with seeds provided by True Random Number
Generator block inside the SoC.
Implement only the fixed seeds part of it in polling mode (no
interrupts).
Emulation tested with two independent Linux kernel exynos-rng drivers:
1. New kcapi-rng interface (targeting Linux v4.12),
2. Old hwrng interface
# echo "exynos" > /sys/class/misc/hw_random/rng_current
# dd if=/dev/hwrng of=/dev/null bs=1 count=16
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Message-id: 20170425180609.11004-1-krzk@kernel.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
[PMM: wrapped a few overlong lines; more efficient implementation
of exynos4210_rng_seed_ready()]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The content of the backends/trace-events file was entirely
removed in
commit 6b10e573d1
Author: Marc-André Lureau <marcandre.lureau@redhat.com>
Date: Mon May 29 12:39:42 2017 +0400
char: move char devices to chardev/
Leaving the empty file around causes tracetool to generate
an empty .dtrace file which makes the dtrace compiler throw
a syntax error.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170629162046.4135-1-berrange@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
When XIVE is supported, the device tree should be populated
accordingly and the XIVE memory regions mapped to activate MMIOs.
Depending on the design we choose, we could also allocate different
ICS and ICP objects, or switch between objects. This needs to be
discussed.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
On POWER9, the Client Architecture Support (CAS) negotiation process
determines whether the guest operates in XIVE Legacy compatibility
(the former POWER8 interrupt model) or in XIVE exploitation mode (the
newer POWER9 interrupt model).
Bit 7 of Byte 23 of vector 5 is used for this purpose.
Signed-off-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When running KVM on POWER, we allow the user to pass "-cpu POWERx" instead
of "-cpu host". This is achieved by patching the ppc_cpu_aliases[] array
so that "POWERx" points to the CPU class with the same PVR as the host CPU.
This causes CPUs to be instantiated from this CPU class instead of the
TYPE_HOST_POWERPC_CPU class which is used with "-cpu host". These CPUs thus
miss all the KVM specific tuning from kvmppc_host_cpu_class_init().
This currently causes QEMU with "-cpu POWER9" to fail when running KVM on a
POWER9 DD1 host:
qemu-system-ppc64: Register sync failed... If you're using kvm-hv.ko, only
"-cpu host" is possible
kvm_init_vcpu failed: Invalid argument
Let's have the "POWERx" alias to point to TYPE_HOST_POWERPC_CPU directly,
so that "-cpu POWERx" instantiates CPUs from the same class as "-cpu host".
Signed-off-by: Greg Kurz <groug@kaod.org>
Tested-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We print a warning if the spapr IOMMU isn't configured to support a page
size matching the host page size backing RAM. When that's the case we need
more complex logic to translate VFIO mappings, which is slower.
But, it's not so slow that it would be at all noticeable against the
general slowness of TCG. So, only warn when using KVM. This removes some
noisy and unhelpful warnings from make check on hosts with page sizes
which typically differ from those on POWER (e.g. Sparc).
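In other words, the warning simply becomes conditional on KVM (message text
and surrounding context here are illustrative, not the exact code):

    if (kvm_enabled()) {
        error_report("Host IOMMU page size does not match the backing "
                     "page size; VFIO mappings will use a slower path");
    }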
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Thomas Huth <thuth@redhat.com>
QEMU shouldn't abort if spapr_add_lmbs()->spapr_drc_attach() fails.
Let's propagate the error instead, like it is done everywhere else
where spapr_drc_attach() is called.
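The usual QEMU pattern this refers to looks roughly like the following
(the attach call shown is schematic, not the exact spapr_drc_attach()
signature):

    Error *local_err = NULL;

    spapr_drc_attach(drc, dev, fdt, fdt_offset, &local_err);
    if (local_err) {
        error_propagate(errp, local_err);   /* report instead of aborting */
        return;
    }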
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Daniel Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In target/ppc/mmu-hash64.c there already exists the function
ppc_hash64_get_phys_page_debug() to get the physical (real) address for
a given effective address in hash mode.
Implement the function ppc_radix64_get_phys_page_debug() to allow a real
address to be obtained for a given effective address in radix mode.
This is used when a debugger is attached to qemu.
Previously we just had a comment saying this was unimplemented; the code
then fell through to the default case and aborted with an "unrecognised
mmu model" error, because the default had no case for the V3 MMU. That
was misleading at best.
We reuse ppc_radix64_walk_tree() which is used by the radix fault
handler since the process of walking the radix tree is identical.
Reported-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The mmu-radix64.c file implements functions to enable the radix mmu
emulation in tcg mode. There is a function ppc_radix64_walk_tree() which
performs the radix tree walk and also implicitly checks the pte
protection.
Move the protection checking of the pte from the ppc_radix64_walk_tree()
function into the caller. This means the ppc_radix64_walk_tree() function
can be used without protection checking which is useful for debugging.
ppc_radix64_walk_tree() no longer needs to take the rwx and prot variables.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
AIUI, ->unplug_request in the HotplugHandler is used for "soft"
unplug, where acknowledgement from the guest is required before
completing the unplug, whereas ->unplug is used for "hard" unplug
where qemu unilaterally removes the device, and the guest just has to
cope with its sudden absence. For spapr we (correctly) use
->unplug_request for CPU and memory hot unplug but we use ->unplug for
PCI.
While I think it might be possible to support "hard" PCI unplug within
the PAPR model, that's not how it actually works now. Although it's
called from ->unplug, the PCI unplug path will usually just mark the
device for removal, with completion of the unplug delayed until
the guest responds to the unplug notification. If the guest doesn't
respond as expected, that could delay the unplug completion arbitrarily
long.
To reflect that, change the PCI unplug path to be called from
->unplug_request. We also rename spapr_phb_hot_plug_child() and
spapr_phb_hot_unplug_child() to spapr_pci_plug() and
spapr_pci_unplug_request() to more obviously reflect the callbacks they're
implementing.
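Schematically (the callback names here are illustrative), PCI is simply
wired to the "soft" callback in the machine's hotplug handler class:

    static void spapr_machine_class_init(ObjectClass *oc, void *data)
    {
        HotplugHandlerClass *hc = HOTPLUG_HANDLER_CLASS(oc);

        hc->plug = spapr_machine_device_plug;
        /* PCI now joins CPU and memory on the guest-acknowledged path */
        hc->unplug_request = spapr_machine_device_unplug_request;
    }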
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
spapr_drc_attach() has a 'coldplug' parameter which sets the DRC into
configured state initially, instead of the usual ISOLATED/UNUSABLE state.
It turns out this is unnecessary: although coldplugged devices do need to
be in CONFIGURED state once the guest starts, that will already be
accomplished by the reset code which will move DRCs for already plugged
devices into a coldplug equivalent state.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
At the moment, spapr_drc_release() has an ugly switch on the DRC type to
call the right, device-specific release function. This cleans it up by
doing that via a proper QOM method.
It's still arguably an abstraction violation for the DRC code to call into
the specific device code, but one mess at a time.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
DRC objects have a regular device reset method. However, it only gets
called in the usual way for PCI DRCs. Because of where CPU and LMB DRCs
are in the QOM tree, their device reset method isn't automatically called.
So, the machine manually registers reset handlers to call device_reset().
This patch removes the device reset method, and instead always explicitly
registers the reset handler from realize(). This means the callers don't
have to worry about the two cases, and we always get proper resets.
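A sketch of the realize-time registration (function names are illustrative):

    static void spapr_dr_connector_realize(DeviceState *d, Error **errp)
    {
        sPAPRDRConnector *drc = SPAPR_DR_CONNECTOR(d);

        /* ... existing realize work ... */

        /* one registration covers PCI, CPU and LMB DRCs alike */
        qemu_register_reset(drc_reset, drc);
    }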
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
The DR-indicator is essentially a "virtual LED" attached to a hotpluggable
device, which the guest can set to various states for the attention of
the operator or management layers.
It's mostly guest managed, except that we once-off set it to
ACTIVE/INACTIVE in the attach/detach path. While that makes certain sense,
there's no indication in PAPR that the hypervisor should do this, and the
drmgr code on the guest side doesn't appear to need it (it will already set
the indicator to ACTIVE on hotplug, and INACTIVE on remove).
So, leave the DR-indicator entirely to the guest; the only thing we need
to do is ensure it's in a sane state on reset.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Properly set the book E exception syndrome register when a floating
point exception occurs.
Currently on a book E processor, the POWERPC_EXCP_FP exception handler
fails to set "env->spr[SPR_BOOKE_ESR] = ESR_FP;" as required by the
book E specification.
Signed-off-by: Aaron Larson <alarson@ddci.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since commit 5c4537bd ("spapr: Fix 2.7<->2.8 migration of PCI host bridge"),
some migration fields are forged from the new ones in spapr_pci_pre_save().
It works well, except when the number of MSI devices is 0,
because in this case the function exits immediately.
This fix moves the migration code before the exit code.
The problem can be reproduced with these commands:
source qemu-2.9:
qemu-system-ppc64 -monitor stdio -M pseries-2.6 -nodefaults -S
destination qemu-2.6:
qemu-system-ppc64 -monitor stdio -M pseries-2.6 -nodefaults \
-incoming tcp:0:4444
on the source:
migrate tcp:localhost:4444
Destination fails with the following error:
qemu-system-ppc64: error while loading state for
instance 0x0 of device 'spapr_pci'
qemu-system-ppc64: load of migration failed: Invalid argument
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
$ git grep spapr_ppc_reset
hw/ppc/spapr.c: * as part of spapr_ppc_reset().
$ git grep ppc_spapr_reset
hw/ppc/spapr.c:static void ppc_spapr_reset(void)
hw/ppc/spapr.c: mc->reset = ppc_spapr_reset;
hw/ppc/spapr_hcall.c: /* If ppc_spapr_reset() did not set up a HPT but one is necessary
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Since commit ff9006ddbf ("spapr: move spapr_core_[foo]plug() callbacks
close to machine code in spapr.c"), this function doesn't need to be extern
anymore.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Migration pull 2017-07-10
# gpg: Signature made Mon 10 Jul 2017 18:04:57 BST
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-migration-20170710a:
migration: Make compression_threads use save/load_setup/cleanup()
migration: Convert ram to use new load_setup()/load_cleanup()
migration: Create load_setup()/cleanup() methods
migration: Rename cleanup() to save_cleanup()
migration: Rename save_live_setup() to save_setup()
doc: update TYPE_MIGRATION documents
doc: add item for "-M enforce-config-section"
vl: move global property, migrate init earlier
migration: fix handling for --only-migratable
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Once there, I rename ram_migration_cleanup() to ram_save_cleanup().
Notice that this is the first pass, and I only passed XBZRLE to the
new scheme. Moved decoded_buf to inside XBZRLE struct.
As a bonus, I don't have to export xbzrle functions from ram.c.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
--
loaded_data pointer was needed because the called function can change it (dave)
spell loaded correctly in comment (dave)
Message-Id: <20170628095228.4661-5-quintela@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
We need to do things at load time and at cleanup time.
Signed-off-by: Juan Quintela <quintela@redhat.com>
--
Move the printing of the error message so we can print the device
giving the error.
Add call to postcopy stuff
Message-Id: <20170628095228.4661-4-quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Currently drive_init_func() may call migrate_get_current() while the
migration object is not yet ready. Move the migration
object init earlier, along with the global properties, right after
acceleration init.
This fixes a breakage for iotest 055, which caused an assertion failure.
Reported-by: Max Reitz <mreitz@redhat.com>
Reported-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Tested-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Fixes: 3df663 ("migration: move only_migratable to MigrationState")
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1499242883-2184-3-git-send-email-peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Intel 82599 VFs report a PCIe capability version of 0, which is
invalid. The earliest version of the PCIe spec used version 1. This
causes Windows to fail startup on the device and it will be disabled
with error code 10. Our choices are either to drop the PCIe cap on
such devices, which has the side effect of likely preventing the guest
from discovering any extended capabilities, or performing a fixup to
update the capability to the earliest valid version. This implements
the latter.
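A sketch of such a fixup using the generic PCI config helpers (the
vfio-specific plumbing around emulated config space is omitted):

    uint16_t flags = pci_get_word(pdev->config + pos + PCI_EXP_FLAGS);

    if ((flags & PCI_EXP_FLAGS_VERS) == 0) {
        /* version 0 is invalid; bump it to the earliest valid version, 1 */
        flags = (flags & ~PCI_EXP_FLAGS_VERS) | 1;
        pci_set_word(pdev->config + pos + PCI_EXP_FLAGS, flags);
    }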
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
VFIOGroup.device_list is effectively our reference tracking mechanism
such that we can teardown a group when all of the device references
are removed. However, we also use this list from our machine reset
handler for processing resets that affect multiple devices. Generally
device removals are fully processed (exitfn + finalize) when this
reset handler is invoked, however if the removal is triggered via
another reset handler (piix4_reset->acpi_pcihp_reset) then the device
exitfn may run, but not finalize. In this case we hit asserts when
we start trying to access PCI helpers since much of the PCI state of
the device is released. To resolve this, add a pointer to the Object
DeviceState in our common base-device and skip non-realized devices
as we iterate.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Let NBD use the trace mechanisms already present in qemu. Now you can
use the -trace option of qemu, or the -T/--trace option of qemu-img,
qemu-io, and qemu-nbd, to select nbd traces. For qemu, the QMP commands
trace-event-{get,set}-state can also toggle tracing on the fly.
Example:
qemu-nbd --trace 'nbd_*' <image file> # enables all nbd traces
Recompilation with CFLAGS=-DDEBUG_NBD is no longer needed; furthermore,
the DEBUG_NBD macro is removed from the code.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170707152918.23086-11-vsementsov@virtuozzo.com>
[eblake: minor tweaks to a couple of traces]
Signed-off-by: Eric Blake <eblake@redhat.com>
Combine two successive "if (oldStyle) {...} else {...}" into one.
Block "if (client->tlscreds)" under "if (oldStyle)" is unreachable,
as we have "oldStyle = client->exp != NULL && !client->tlscreds;".
So, delete this block.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170707152918.23086-3-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Separate the case when a client sends NBD_OPT_ABORT from all other
errors. It will be needed for the following patch, where errors will be
reported.
This particular case is not actually an error - it honestly follows the
NBD protocol. Therefore it should not be reported like an error.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170707152918.23086-2-vsementsov@virtuozzo.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
We are promising more than just odd fixes, and Paolo is hoping
to offload the pull requests to me. Also, enough of NBD is related
to the block layer that it is worth including qemu-block on patches.
While at it, include blockdev-nbd.c and qemu-nbd.texi in the set
of maintained files.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170707182151.29872-1-eblake@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Block layer patches
# gpg: Signature made Mon 10 Jul 2017 12:26:44 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (40 commits)
block: Make bdrv_is_allocated_above() byte-based
block: Minimize raw use of bds->total_sectors
block: Make bdrv_is_allocated() byte-based
backup: Switch backup_run() to byte-based
backup: Switch backup_do_cow() to byte-based
backup: Switch block_backup.h to byte-based
backup: Switch BackupBlockJob to byte-based
block: Drop unused bdrv_round_sectors_to_clusters()
mirror: Switch mirror_iteration() to byte-based
mirror: Switch mirror_do_read() to byte-based
mirror: Switch mirror_cow_align() to byte-based
mirror: Update signature of mirror_clip_sectors()
mirror: Switch mirror_do_zero_or_discard() to byte-based
mirror: Switch MirrorBlockJob to byte-based
commit: Switch commit_run() to byte-based
commit: Switch commit_populate() to byte-based
stream: Switch stream_run() to byte-based
stream: Drop reached_end for stream_complete()
stream: Switch stream_populate() to byte-based
trace: Show blockjob actions via bytes, not sectors
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the signature of the function to use int64_t *pnum ensures
that the compiler enforces that all callers are updated. For now,
the io.c layer still assert()s that all callers are sector-aligned,
but that can be relaxed when a later patch implements byte-based
block status. Therefore, for the most part this patch is just the
addition of scaling at the callers followed by inverse scaling at
bdrv_is_allocated(). But some code, particularly stream_run(),
gets a lot simpler because it no longer has to mess with sectors.
Leave comments where we can further simplify by switching to
byte-based iterations, once later patches eliminate the need for
sector-aligned operations.
For ease of review, bdrv_is_allocated() was tackled separately.
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
bdrv_is_allocated_above() was relying on intermediate->total_sectors,
which is a field that can have stale contents depending on the value
of intermediate->has_variable_length. An audit shows that we are safe
(we were first calling through bdrv_co_get_block_status() which in
turn calls bdrv_nb_sectors() and therefore just refreshed the current
length), but it's nicer to favor our accessor functions to avoid having
to repeat such an audit, even if it means refresh_total_sectors() is
called more frequently.
Suggested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually moving away from sector-based interfaces, towards
byte-based. In the common case, allocation is unlikely to ever use
values that are not naturally sector-aligned, but it is possible
that byte-based values will let us be more precise about allocation
at the end of an unaligned file that can do byte-based access.
Changing the signature of the function to use int64_t *pnum ensures
that the compiler enforces that all callers are updated. For now,
the io.c layer still assert()s that all callers are sector-aligned
on input and that *pnum is sector-aligned on return to the caller,
but that can be relaxed when a later patch implements byte-based
block status. Therefore, this code adds usages like
DIV_ROUND_UP(,BDRV_SECTOR_SIZE) to callers that still want aligned
values, where the call might reasonably give non-aligned results
in the future; on the other hand, no rounding is needed for callers
that should just continue to work with byte alignment.
For the most part this patch is just the addition of scaling at the
callers followed by inverse scaling at bdrv_is_allocated(). But
some code, particularly bdrv_commit(), gets a lot simpler because it
no longer has to mess with sectors; also, it is now possible to pass
NULL if the caller does not care how much of the image is allocated
beyond the initial offset. Leave comments where we can further
simplify once a later patch eliminates the need for sector-aligned
requests through bdrv_is_allocated().
For ease of review, bdrv_is_allocated_above() will be tackled
separately.
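For example, a caller that still thinks in sectors ends up doing something
like this around the byte-based call (a sketch, not a quote of any
particular caller):

    int64_t count;
    int ret = bdrv_is_allocated(bs, sector_num * BDRV_SECTOR_SIZE,
                                nb_sectors * BDRV_SECTOR_SIZE, &count);
    if (ret < 0) {
        return ret;
    }
    /* round up so a partially allocated final sector still counts */
    *pnum = DIV_ROUND_UP(count, BDRV_SECTOR_SIZE);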
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Change the internal
loop iteration of backups to track by bytes instead of sectors
(although we are still guaranteed that we iterate by steps that
are cluster-aligned).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal
function (no semantic change).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Continue by converting
the public interface to backup jobs (no semantic change), including
a change to CowRequest to track by bytes instead of cluster indices.
Note that this does not change the difference between the public
interface (starting point, and size of the subsequent range) and
the internal interface (starting and end points).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Xie Changlong <xiechanglong@cmss.chinamobile.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Continue by converting an
internal structure (no semantic change), and all references to
tracking progress. Drop a redundant local variable bytes_per_cluster.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Now that the last user [mirror_iteration()] has converted to using
bytes, we no longer need a function to round sectors to clusters.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Change the internal
loop iteration of mirroring to track by bytes instead of sectors
(although we are still guaranteed that we iterate by steps that
are both sector-aligned and multiples of the granularity). Drop
the now-unused mirror_clip_sectors().
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal
function, preserving all existing semantics, and adding one more
assertion that things are still sector-aligned (so that conversions
to sectors in mirror_read_complete don't need to round).
Signed-off-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal
function (no semantic change), and add mirror_clip_bytes() as a
counterpart to mirror_clip_sectors(). Some of the conversion is
a bit tricky, requiring temporaries to convert between units; it
will be cleared up in a following patch.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Rather than having a void function that modifies its input
in-place as the output, change the signature to reduce a layer
of indirection and return the result.
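Schematically, the change is from "void f(..., int *nb_sectors)" to
something like the following (field names are illustrative):

    static int mirror_clip_sectors(MirrorBlockJob *s, int64_t sector_num,
                                   int nb_sectors)
    {
        return MIN(nb_sectors,
                   s->bdev_length / BDRV_SECTOR_SIZE - sector_num);
    }

    /* caller */
    nb_sectors = mirror_clip_sectors(s, sector_num, nb_sectors);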
Suggested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Convert another internal
function (no semantic change).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Continue by converting an
internal structure (no semantic change), and all references to the
buffer size.
Add an assertion that our use of s->granularity >> BDRV_SECTOR_BITS
(necessary for interaction with sector-based dirty bitmaps, until
a later patch converts those to be byte-based) does not suffer from
truncation problems.
[checkpatch has a false positive on use of MIN() in this patch]
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Change the internal
loop iteration of committing to track by bytes instead of sectors
(although we are still guaranteed that we iterate by steps that
are sector-aligned).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Start by converting an
internal function (no semantic change).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Change the internal
loop iteration of streaming to track by bytes instead of sectors
(although we are still guaranteed that we iterate by steps that
are sector-aligned).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
stream_complete() skips the work of rewriting the backing file if
the job was cancelled, if data->reached_end is false, or if there
was an error detected (non-zero data->ret) during the streaming.
But note that in stream_run(), data->reached_end is only set if the
loop ran to completion, and data->ret is only 0 in two cases:
either the loop ran to completion (possibly by cancellation, but
stream_complete checks for that), or we took an early goto out
because there is no bs->backing. Thus, we can preserve the same
semantics without the use of reached_end, by merely checking for
bs->backing (and logically, if there was no backing file, streaming
is a no-op, so there is no backing file to rewrite).
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are gradually converting to byte-based interfaces, as they are
easier to reason about than sector-based. Start by converting an
internal function (no semantic change).
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Upcoming patches are going to switch to byte-based interfaces
instead of sector-based. Even worse, trace_backup_do_cow_enter()
had a weird mix of cluster and sector indices.
The trace interface is low enough that there are no stability
guarantees, and therefore nothing wrong with changing our units,
even in cases like trace_backup_do_cow_skip() where we are not
changing the trace output. So make the tracing uniformly use
bytes.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The user interface specifies job rate limits in bytes/second.
It's pointless to have our internal representation track things
in sectors/second, particularly since we want to move away from
sector-based interfaces.
Fix up a doc typo found while verifying that the ratelimit
code handles the scaling difference.
Repetition of expressions like 'n * BDRV_SECTOR_SIZE' will be
cleaned up later when functions are converted to iterate over
images by bytes rather than by sectors.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We likely do not want to carry these legacy -drive options along forever.
Let's emit a deprecation warning for the -drive options that have a
replacement with the -device option, so that the (hopefully few) remaining
users are aware of this and can adapt their scripts / behaviour accordingly.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The '-e' and '-6' options to the 'create' & 'convert' commands were
"deprecated" in favour of the more generic '-o' option many years ago:
commit eec77d9e71
Author: Jes Sorensen <Jes.Sorensen@redhat.com>
Date: Tue Dec 7 17:44:34 2010 +0100
qemu-img: Deprecate obsolete -6 and -e options
Except this was never actually a deprecation, which would imply giving
the user a warning while the functionality continues to work for a
number of releases before eventual removal. Instead the options were
immediately turned into an error + exit. Given that the functionality
is already broken, there's no point in keeping these pseudo-deprecation
messages around any longer.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
According to specification:
"'MSWIN4.1' is the recommanded setting, because it is the setting least likely
to cause compatibility problems. If you want to put something else in here,
that is your option, but the result may be that some FAT drivers might not
recognize the volume."
Specification: "FAT: General overview of on-disk format" v1.03, page 9
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Specification: "FAT: General overview of on-disk format" v1.03, page 23
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
FAT12/FAT16 root directory is two sectors in size, which allows only 512 directory entries.
Prevent QEMU startup if too many files exist, instead of overflowing the root directory.
Also introduce variable root_entries, which will be required for FAT32.
Fixes: https://bugs.launchpad.net/qemu/+bug/1599539/comments/4
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
More specifically:
- try without numeric-tail only if LFN didn't have invalid short chars
- start at ~1 (instead of ~0)
- handle case if numeric tail is more than one char (ie > 10)
Windows 9x Scandisk no longer sees mismatches between short file names and
long file names for non-ASCII filenames.
Specification: "FAT: General overview of on-disk format" v1.03, page 31
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
More specifically, create short name from filename and change blacklist of
invalid chars to whitelist of valid chars.
Windows 9x also now correctly sees long file names for filenames containing a space,
but Scandisk still complains about mismatches between SFN and LFN.
[kwolf: Build fix for this intermediate patch (it included declarations
for variables that are only used in the next patch) ]
Specification: "FAT: General overview of on-disk format" v1.03, pages 30-31
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Assume that the input filename is encoded as UTF-8, and correctly create the UTF-16 encoding from it.
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
readdir() doesn't always return the . and .. entries first and in that order.
This leads to not creating them at first in the directory, which raises some
errors on file system checking utilities like MS-DOS Scandisk.
Specification: "FAT: General overview of on-disk format" v1.03, page 25
Fixes: https://bugs.launchpad.net/qemu/+bug/1599539
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Specification: "FAT: General overview of on-disk format" v1.03, pages 11-13
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
- offset_to_bootsector is the number of sectors up to FAT bootsector
- offset_to_fat is the number of sectors up to first File Allocation Table
- offset_to_root_dir is the number of sectors up to root directory sector
Replace first_sectors_number - 1 by offset_to_bootsector.
Replace first_sectors_number by offset_to_fat.
Replace faked_sectors by offset_to_root_dir.
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
MODE_FAKED and MODE_RENAMED are not and were never used.
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This was a complete mess. On 2299 indented lines:
- 1329 were with spaces only
- 617 with tabulations only
- 353 with spaces and tabulations
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
- bs->total_sectors is the number of sectors of the whole disk
- s->sector_count is the number of sectors of the FAT partition
This fixes the following assert in qemu-img map:
qemu-img.c:2641: get_block_status: Assertion `nb_sectors' failed.
This also fixes an infinite loop in qemu-img convert.
Fixes: 4480e0f924
Fixes: https://bugs.launchpad.net/qemu/+bug/1599539
Cc: qemu-stable@nongnu.org
Signed-off-by: Hervé Poussineau <hpoussin@reactos.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Without a passthrough status of BDRV_BLOCK_RAW, anything wrapped by
blkdebug appears 100% allocated as data. Better is treating it the
same as the underlying file being wrapped.
Update iotest 177 for the new expected output.
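A sketch of a passthrough .bdrv_co_get_block_status for a filter driver,
modelled on the sector-based callback signature of this era:

    static int64_t coroutine_fn blkdebug_co_get_block_status(
        BlockDriverState *bs, int64_t sector_num, int nb_sectors,
        int *pnum, BlockDriverState **file)
    {
        *pnum = nb_sectors;
        *file = bs->file->bs;
        return BDRV_BLOCK_RAW | BDRV_BLOCK_OFFSET_VALID |
               (sector_num << BDRV_SECTOR_BITS);
    }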
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The lone caller that cares about a return of BDRV_BLOCK_RAW
(namely, io.c:bdrv_co_get_block_status) completely replaces the
return value, so there is no point in passing BDRV_BLOCK_DATA.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We document that *file is valid if the return is not an error and
includes BDRV_BLOCK_OFFSET_VALID, but forgot to obey this contract
when a driver (such as blkdebug) lacks a callback. Messed up in
commit 67a0fd2 (v2.6), when we added the file parameter.
Enhance qemu-iotest 177 to cover this, using a sequence that would
print garbage or even SEGV, because it was dereferencing through
uninitialized memory. [The resulting test output shows that we
have less-than-ideal block status from the blkdebug driver, but
that's a separate fix coming up soon.]
Setting *file on all paths that return BDRV_BLOCK_OFFSET_VALID is
enough to fix the crash, but we can go one step further: always
setting *file, even on error, means that a broken caller that
blindly dereferences file without checking for error is now more
likely to get a reliable SEGV instead of randomly acting on garbage,
making it easier to diagnose such buggy callers. Adding an
assertion that file is set where expected doesn't hurt either.
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Most callback commands in qemu-io return 0 to keep the interpreter
loop running, or 1 to quit immediately. However, open_f() just
passed through the return value of openfile(), which has different
semantics of returning 0 if a file was opened, or 1 on any failure.
As a result of mixing the return semantics, we are forcing the
qemu-io interpreter to exit early on any failures, which is rather
annoying when some of the failures are obviously trying to give
the user a hint of how to proceed (if we didn't then kill qemu-io
out from under the user's feet):
$ qemu-io
qemu-io> open foo
qemu-io> open foo
file open already, try 'help close'
$ echo $?
0
In general, we WANT openfile() to report failures, since it is the
function used in the form 'qemu-io -c "$something" no_such_file'
for performing one or more -c options on a single file, and it is
not worth attempting $something if the file itself cannot be opened.
So the solution is to fix open_f() to always return 0 (when we are
in interactive mode, even failure to open should not end the
session), and save the return value of openfile() for command line
use in main().
Note, however, that we do have some qemu-iotests that do 'qemu-io
-c "open file" -c "$something"'; such tests will now proceed to
attempt $something whether or not the open succeeded, the same way
as if the two commands had been attempted in interactive mode. As
such, the expected output for those tests has to be modified. But it
also means that it is now possible to use -c close and have a single
qemu-io command line operate on more than one file even without
using interactive mode. Although the '-c open' action is a subtle
change in behavior, remember that qemu-io is for debugging purposes,
so as long as it serves the needs of qemu-iotests while still being
reasonable for interactive use, it should not be a problem that we
are changing tests to the new behavior.
This has been awkward since at least as far back as commit
e3aff4f, in 2009.
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Clang generates the following warning on aarch64 host:
CC util/cacheinfo.o
/home/pranith/qemu/util/cacheinfo.c:121:48: warning: value size does not match register size specified by the constraint and modifier [-Wasm-operand-widths]
asm volatile("mrs\t%0, ctr_el0" : "=r"(ctr));
^
/home/pranith/qemu/util/cacheinfo.c:121:28: note: use constraint modifier "w"
asm volatile("mrs\t%0, ctr_el0" : "=r"(ctr));
^~
%w0
Constraint modifier 'w' is not (yet?) accepted by gcc. Fix this by increasing the ctr size.
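The fix boils down to using a 64-bit variable so that the "=r" constraint
selects an X register (surrounding code simplified):

    uint64_t ctr;

    asm volatile("mrs\t%0, ctr_el0" : "=r"(ctr));
    *isize = 4 << (ctr & 0xf);          /* IminLine, in 4-byte words */
    *dsize = 4 << ((ctr >> 16) & 0xf);  /* DminLine, in 4-byte words */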
Tested-by: Emilio G. Cota <cota@braap.org>
Reviewed-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20170630153946.11997-1-bobby.prani@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We use ADRP+ADD to compute the target address for goto_tb. This patch
introduces the NOP instruction which is used to align the above
instruction pair so that we can use one atomic instruction to patch
the destination offsets.
CC: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Pranith Kumar <bobby.prani@gmail.com>
Message-Id: <20170630143614.31059-2-bobby.prani@gmail.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
When the guest unplugs the emulated NICs, cleanup the peer for each NIC
as it is not needed anymore. Most importantly, this allows the tap
interfaces which QEMU holds open to be closed and removed.
Signed-off-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Initialize xenfb properly, as all other backends, from its own
"initialise" function.
Remove the dependency of vkbd on vfb: use qemu_console_lookup_by_index
to find the principal console (to get the size of the screen) instead of
relying on a vfb backend to be available (which adds a dependency
between the two).
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
* qemu-thread portability improvement (Fam)
* virtio-scsi IOMMU fix (Jason)
* poisoning and common-obj-y cleanups (Thomas)
* initial Hypervisor.framework refactoring (Sergio)
* x86 TCG interrupt injection fixes (Wu Xiang, me)
* --disable-tcg support for x86 (Yang Zhong, me)
* various other bugfixes and cleanups (Daniel, Peter, Thomas)
# gpg: Signature made Wed 05 Jul 2017 08:12:56 BST
# gpg: using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg: aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
# Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (42 commits)
target/i386: add the CONFIG_TCG into Makefiles
target/i386: add the tcg_enabled() in target/i386/
target/i386: move TLB refill function out of helper.c
target/i386: split cpu_set_mxcsr() and make cpu_set_fpuc() inline
target/i386: make cpu_get_fp80()/cpu_set_fp80() static
target/i386: move cpu_sync_bndcs_hflags() function
tcg: add the CONFIG_TCG into Makefiles
tcg: add CONFIG_TCG guards in headers
exec: elide calls to tb_lock and tb_unlock
tcg: move tb_lock out of translate-all.h
tcg: add the tcg-stub.c file into accel/stubs/
vapic: use tcg_enabled
monitor: disable "info jit" and "info opcount" if !TCG
tcg: make tcg_allowed global
cpu: move interrupt handling out of translate-common.c
tcg: move page_size_init() function
vl: add tcg_enabled() for tcg related code
vl: convert -tb-size to qemu_strtoul
configure: add --disable-tcg configure option
configure: early test for supported targets
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This patch is based on a similar patch from Stefan Hajnoczi -
commit c324fd0a39 ("virtio-pci: use ioeventfd even when KVM is disabled")
Do not check kvm_eventfds_enabled() when KVM is disabled since it
always returns 0. Since commit 8c56c1a592
("memory: emulate ioeventfd") it has been possible to use ioeventfds in
qtest or TCG mode.
This patch makes -device virtio-scsi-ccw,iothread=iothread0 work even
when KVM is disabled.
Currently we don't have an equivalent to "memory: emulate ioeventfd"
for ccw yet, but this doesn't hurt, and qemu-iotests 068 can pass by
skipping the iothread arguments.
I have tested that virtio-scsi-ccw works under tcg both with and without
iothread.
This patch fixes qemu-iotests 068, which was accidentally merged early
despite the dependency on ioeventfd.
Signed-off-by: QingFeng Hao <haoqf@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Message-Id: <20170704132350.11874-2-haoqf@linux.vnet.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
The response for query-cpu-definitions didn't include the
unavailable-features field, which is used by libvirt to figure
out whether a certain cpu model is usable on the host.
The unavailable features are now computed by obtaining the host CPU
model and comparing it against the known CPU models. The comparison
takes into account the generation, the GA level and the feature
bitmaps. In the case of a CPU generation/GA level mismatch
a feature called "type" is reported to be missing.
As a result, the output of virsh domcapabilities would change
from something like
...
<mode name='custom' supported='yes'>
<model usable='unknown'>z10EC-base</model>
<model usable='unknown'>z9EC-base</model>
<model usable='unknown'>z196.2-base</model>
<model usable='unknown'>z900-base</model>
<model usable='unknown'>z990</model>
...
to
...
<mode name='custom' supported='yes'>
<model usable='yes'>z10EC-base</model>
<model usable='yes'>z9EC-base</model>
<model usable='no'>z196.2-base</model>
<model usable='yes'>z900-base</model>
<model usable='yes'>z990</model>
...
Signed-off-by: Viktor Mihajlovski <mihajlov@linux.vnet.ibm.com>
Message-Id: <1499082529-16970-1-git-send-email-mihajlov@linux.vnet.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Commit f6f4ce4211 ("s390x: add property adapter_routes_max_batch",
2016-12-09) introduces a common realize (intended to be common for all
the subclasses) for flic, but fails to make sure the kvm-flic which had
its own is actually calling this common realize.
This omission fortunately does not result in a grave problem. The common
realize was only supposed to catch a possible programming mistake by
validating a value of a property set via the compat machine macros. Since
there was no programming mistake we don't need this fixed for stable.
Let's fix this problem by making sure kvm flic honors the realize of its
parent class.
Let us also improve on the error message we would hypothetically emit
when the validation fails.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Fixes: f6f4ce4211 ("s390x: add property adapter_routes_max_batch")
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Reviewed-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
From the moment it was introduced by commit a2875e6f98 ("s390x/kvm:
implement floating-interrupt controller device", 2013-07-16) the kvm-flic
is not making realize fail properly in case it's impossible to create the
KVM device which basically serves as a backend and is absolutely
essential for having an operational kvm-flic.
Let's fix this by making sure we do proper error propagation in realize.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Fixes: a2875e6f98 "s390x/kvm: implement floating-interrupt controller device"
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Reviewed-by: Yi Min Zhao <zyimin@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Commit bab482d740 ("s390x/css: ccw translation infrastructure")
introduced instruction interception handler for different types of
subchannels. For emulated 3270 devices, we should assign the virtual
subchannel handler to them during device realization process, or 3270
will not work.
Fixes: bab482d740 ("s390x/css: ccw translation infrastructure")
Reviewed-by: Jing Liu <liujbjl@linux.vnet.ibm.com>
Reviewed-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Let's vmstatify virtio_ccw_save_config and virtio_ccw_load_config for
flexibility (extending using subsections) and for fun.
To achieve this we need to hack the config_vector, which is VirtIODevice
(that is common virtio) state, in the middle of the VirtioCcwDevice state
representation. This is somewhat ugly, but we have no choice because the
stream format needs to be preserved.
Almost no changes in behavior. The exception is everything that comes with
vmstate like extra bookkeeping about what's in the stream, and maybe some
extra checks and better error reporting.
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Message-Id: <20170703213414.94298-1-pasic@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Add the CONFIG_TCG for frontend and backend's files in the related
Makefiles.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add the tcg_enabled() where the x86 target needs to disable
TCG-specific code.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This function calls tlb_set_page_with_attrs, which is not available
when TCG is disabled. Move it to excp_helper.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Split the cpu_set_mxcsr() and make cpu_set_fpuc() inline with specific
tcg code.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move cpu_get_fp80()/cpu_set_fp80() from fpu_helper.c to
machine.c because fpu_helper.c will be disabled if tcg is
disabled in the build.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move cpu_sync_bndcs_hflags() function from mpx_helper.c
to helper.c because mpx_helper.c needs to be disabled when
tcg is disabled.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add the CONFIG_TCG for frontend and backend's files in the related
Makefiles.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Add CONFIG_TCG around TLB-related functions and structure declarations.
Some of these functions are defined in ./accel/tcg/cputlb.c, which will
not be linked in if TCG is disabled, and have no stubs; therefore, their
callers will also be compiled out for --disable-tcg.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
If TCG is disabled, the functions in the tcg-stub.c file will be called.
This file is target-independent; do not include any platform-related
stub functions in it.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Change tcg_enabled() and make sure the user-mode build still enables
TCG even when the x86 softmmu build disables it.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
translate-common.c will not be available anymore with --disable-tcg,
so we cannot leave cpu_interrupt_handler there.
Move the TCG-specific handler to accel/tcg/tcg-all.c, and adopt
KVM's handler as the default one, since it works just as well for
Xen and qtest.
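A rough sketch of what such a default handler looks like (the function name
is illustrative, not necessarily the one used in the tree):

    static void generic_handle_interrupt(CPUState *cpu, int mask)
    {
        cpu->interrupt_request |= mask;

        /* Kick the vCPU out of its execution loop if the caller is not it. */
        if (!qemu_cpu_is_self(cpu)) {
            qemu_cpu_kick(cpu);
        }
    }

    CPUInterruptHandler cpu_interrupt_handler = generic_handle_interrupt;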
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
translate-all.c will be disabled if TCG is disabled in the build,
so the page_size_init() function and related variables are moved
to the exec.c file.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Disable the TCG-related code in vl.c when the --disable-tcg option is
passed to ./configure.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This lets you build without TCG (hardware acceleration or qtest only). When
this flag is passed to configure, it will automatically filter the target
list down to only those targets that support KVM, Xen or HAX.
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Check for unsupported targets in target_list, and print an
error early in the configuration process.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This will be useful when the functions are called, early in the configure
process, to filter out targets that do not support hardware acceleration.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Not all platforms check whether a lock is initialized before it is used;
in particular, Linux seems to be more permissive than OSX.
Check the initialization state explicitly in our code to catch such bugs
earlier.
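A simplified sketch of the kind of explicit check this adds (the
'initialized' field name and the error_exit() helper are assumptions here):

    void qemu_mutex_lock(QemuMutex *mutex)
    {
        int err;

        /* Catch use of a lock that was never qemu_mutex_init()ed. */
        assert(mutex->initialized);

        err = pthread_mutex_lock(&mutex->lock);
        if (err) {
            error_exit(err, __func__);
        }
    }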
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170704122325.25634-1-famz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Using signal to establish a signal handler is not portable; on
SysV systems, the signal handler would be reset to SIG_DFL after
delivery, while BSD preserves the signal handler. Daniel Berrange
reported that (to complicate matters further) the signal system call
has SysV behavior, but glibc signal() actually calls the sigaction
system call to provide BSD behavior.
However, using signal() to set a signal's disposition to SIG_DFL
or SIG_IGN is portable and is a relatively common occurrence in
QEMU source code, so allow that.
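To illustrate the distinction (a sketch, not code from the tree): resetting
a disposition via signal() stays allowed, while installing a real handler
should go through sigaction():

    #include <signal.h>
    #include <string.h>

    static void term_handler(int sig)
    {
        /* handle SIGTERM */
    }

    void setup_signals(void)
    {
        struct sigaction act;

        /* Portable: SIG_DFL/SIG_IGN dispositions may still use signal(). */
        signal(SIGPIPE, SIG_IGN);

        /* For handlers, sigaction() gives BSD semantics on every platform. */
        memset(&act, 0, sizeof(act));
        act.sa_handler = term_handler;
        sigemptyset(&act.sa_mask);
        sigaction(SIGTERM, &act, NULL);
    }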
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In QEMU's main_loop() we used to check whether we should do
a nonblocking call to main_loop_wait(); this was deleted in commit e330c118f2,
because now that vCPUs always drop the I/O thread lock it is an unnecessary
optimization.
The loop in test-char.c copied the old QEMU main_loop() code, but
here the nonblocking check has never been necessary because this
standalone test case doesn't hold the I/O lock anyway. Remove it,
so we can drop the main_loop_wait() return value.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1498584769-12439-2-git-send-email-peter.maydell@linaro.org>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The original ready < nhandles - 1 can be re-written as ready + 1 <
nhandles. The check was actually incorrect because
WAIT_OBJECT_0 was not subtracted from ready; it worked because
WAIT_OBJECT_0 is zero. After subtracting WAIT_OBJECT_0,
the result is the same condition that we check on the first
iteration of the for loop. This means we can remove the if statement
and let the for loop perform the check.
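A simplified sketch of the resulting shape of the code (variable names
follow the description above; this is not an exact copy of QEMU's Win32
poll helper):

    ready = WaitForMultipleObjects(nhandles, handles, FALSE, timeout);

    if (ready >= WAIT_OBJECT_0 && ready < WAIT_OBJECT_0 + nhandles) {
        /* The old "if (ready < nhandles - 1)" test is now implied by the
         * loop's start condition on its first iteration. */
        for (i = ready - WAIT_OBJECT_0 + 1; i < nhandles; i++) {
            if (WaitForSingleObject(handles[i], 0) == WAIT_OBJECT_0) {
                /* handle i is ready as well */
            }
        }
    }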
Signed-off-by: Alistair Francis <alistair.francis@xilinx.com>
Message-Id: <a14083d681951f3999a0e9314605cb706381ae8d.1498756113.git.alistair.francis@xilinx.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
The 'sun_path' field in the sockaddr_un struct is not required
to be NUL terminated, so when reporting an error, we must use
the separate 'path' variable, which is guaranteed to be terminated.
Fixes a bug spotted by coverity that was introduced in
commit ad9579aaa1
Author: Daniel P. Berrange <berrange@redhat.com>
Date: Thu May 25 16:53:00 2017 +0100
sockets: improve error reporting if UNIX socket path is too long
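A minimal sketch of the safe pattern (variable names follow the description
above; this is not the exact QEMU code):

    struct sockaddr_un un;
    const char *path = saddr->path;   /* NUL-terminated user-supplied path */

    if (strlen(path) > sizeof(un.sun_path)) {
        /* Report with 'path', never with un.sun_path, which may lack a
         * terminating NUL byte. */
        error_setg(errp, "UNIX socket path '%s' is too long", path);
        return -1;
    }
    memcpy(un.sun_path, path, strlen(path));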
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Message-Id: <20170626103756.22974-1-berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit 1f5c00cfdb ("qom/cpu: move tlb_flush to cpu_common_reset")
moved the call to tlb_flush() from the target-specific reset handlers
into the common code qom/cpu.c file, and protected the call with
"#ifdef CONFIG_SOFTMMU" to avoid that it is called for linux-user
only targets. But since qom/cpu.c is common code, CONFIG_SOFTMMU is
*never* defined here, so the tlb_flush() was simply never executed
anymore. Fix it by introducing a wrapper for tlb_flush() in a file
that is re-compiled for each target, i.e. in translate-all.c.
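Sketched out, the wrapper is simply (the function name here is illustrative):

    /* translate-all.c is compiled per target, so CONFIG_SOFTMMU is
     * meaningful here, unlike in the common qom/cpu.c code. */
    void tcg_flush_softmmu_tlb(CPUState *cs)
    {
    #ifdef CONFIG_SOFTMMU
        tlb_flush(cs);
    #endif
    }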
Fixes: 1f5c00cfdb
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1498454578-18709-5-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
CONFIG_KVM is only defined for target-specific code, so nobody should
use it by accident in common code. To avoid such subtle bugs,
CONFIG_KVM is now marked as poisoned in common code. The header
include/sysemu/kvm.h is somewhat special since it is included
all over the place from common code, too, so we need some extra
logic via "#ifdef NEED_CPU_H" here to make sure that we can
compile all files without problems.
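The poisoning itself is a one-liner; a sketch of the pattern described
above (header locations are given as examples only):

    /* In a header pulled in by common code, e.g. include/exec/poison.h: */
    #pragma GCC poison CONFIG_KVM

    /* In include/sysemu/kvm.h, only target-specific compilation units
     * (NEED_CPU_H) get to see the CONFIG_KVM-based shortcuts: */
    #ifdef NEED_CPU_H
    # ifdef CONFIG_KVM
    #  define kvm_enabled()  (kvm_allowed)
    # else
    #  define kvm_enabled()  (0)
    # endif
    #endif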
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1498454578-18709-4-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
pc.h and sysemu/kvm.h are also included from common code (where
CONFIG_KVM is not available), so the #defines that depend on CONFIG_KVM
should not be declared here to avoid that anybody is using them in a
wrong way. Since we're also going to poison CONFIG_KVM for common code,
let's move them to kvm_i386.h instead. Most of the dummy definitions
from sysemu/kvm.h are also unused since the code that uses them is
only compiled for CONFIG_KVM (e.g. target/i386/kvm.c), so the unused
defines are also simply dropped here instead of being moved.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1498454578-18709-3-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Move the handling of conforming code segments before the handling
of stack switch.
Because dpl == cpl after the new "if", it's now unnecessary to check
the C bit when testing dpl < cpl. Furthermore, dpl > cpl is checked
slightly above the modified code, so the final "else" is unreachable
and we can remove it.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
In do_interrupt64(), when the interrupt stack table (IST) is enabled
and the target code segment is conforming (e2 & DESC_C_MASK), the
old implementation always set the new CPL to 0 and SS.RPL to 0.
This is incorrect: when CPL3 code accesses a CPL0 conforming code
segment, the CPL should remain unchanged. Otherwise higher privileged
code can be compromised.
The patch fixes this by always setting dpl = cpl when the target code segment
is conforming, and by modifying the last parameter `flags`, which contains the
correct new CPL, in cpu_x86_load_seg_cache().
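A rough sketch of the corrected logic (simplified from the real
seg_helper.c code):

    if (e2 & DESC_C_MASK) {
        /* Conforming code segment: execution continues at the caller's CPL,
         * rather than dropping to CPL 0 as the old code did. */
        dpl = cpl;
    }

    /* The descriptor flags passed as the last argument now reflect the
     * correct new CPL when CS is loaded. */
    cpu_x86_load_seg_cache(env, R_CS, (selector & ~3) | dpl,
                           get_seg_base(e1, e2), get_seg_limit(e1, e2), e2);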
Signed-off-by: Wu Xiang <willx8@gmail.com>
Message-Id: <20170621142152.GA18094@wxdeubuntu.ipads-lab.se.sjtu.edu.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When attaching the NBD QIOChannel to an AioContext, the TLS channel should
be used, not the underlying socket channel. This is because, trivially,
the TLS channel will be the one that we read/write to and thus the one
that will get the qio_channel_yield() call.
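A sketch of the idea, assuming a client state that holds both the plain
socket channel (sioc) and the TLS channel wrapping it (ioc); the field names
are illustrative:

    /* Attach the channel we actually read/write on: with TLS enabled that
     * is the TLS channel, not the underlying socket. */
    QIOChannel *ioc = client->tlscreds ? client->ioc : QIO_CHANNEL(client->sioc);

    qio_channel_attach_aio_context(ioc, aio_context);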
Fixes: ff82911cd3
Cc: qemu-stable@nongnu.org
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Tested-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Since commit 3f2ce724f1 ("Move the qemu-ga description into a
separate chapter"), the qemu.1 man page looks pretty much screwed
up, e.g. the title was "qemu-ga - QEMU Guest Agent" instead of
"qemu-doc - QEMU Emulator User Documentation". However, that movement
of the gemu-ga chapter is not the real problem, it just triggered
another bug in the qemu-doc.texi: There are some parts in the file
which introduce a "@c man begin OPTIONS" section, but never close
it again with "@c man end". After adding the proper end tags here,
the title of the man page is right again and the previously wrongly
tagged sections now also show up correctly in the man page, too.
Reported-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1497863771-24929-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
edgar/xilinx-next.for-upstream
# gpg: Signature made Tue 04 Jul 2017 10:00:47 BST
# gpg: using RSA key 0x29C596780F6BCA83
# gpg: Good signature from "Edgar E. Iglesias (Xilinx key) <edgar.iglesias@xilinx.com>"
# gpg: aka "Edgar E. Iglesias <edgar.iglesias@gmail.com>"
# Primary key fingerprint: AC44 FEDC 14F7 F1EB EDBF 4151 29C5 9678 0F6B CA83
* remotes/edgar/tags/edgar/xilinx-next.for-upstream:
xilinx-dp: Add support for the yuy2 video format
target-microblaze: Add CPU version 10.0
target-microblaze: dec_barrel: Add BSIFI
target-microblaze: dec_barrel: Add BSEFI
target-microblaze: dec_barrel: Plug TCG temp leak
target-microblaze: dec_barrel: Add braces around if-statements
target-microblaze: dec_barrel: Use extract32
target-microblaze: dec_barrel: Use bool instead of unsigned int
target-microblaze: Introduce a use-pcmp-instr property
target-microblaze: Introduce a use-msr-instr property
target-microblaze: Introduce a use-hw-mul property
target-microblaze: Introduce a use-div property
target-microblaze: Introduce a use-barrel property
target-microblaze: Add CPU versions 9.4, 9.5 and 9.6
target-microblaze: Don't hard code 0xb as initial MB version
target-microblaze: Correct bit shift for the PVR0 version field
disas/microblaze: Add missing 'const' attributes
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
pc, acpi, pci, virtio: fixes, cleanups, features, tests
Some fixes and cleanups. New tests.
Configurable tx queue size for virtio-net.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
# gpg: Signature made Mon 03 Jul 2017 20:43:17 BST
# gpg: using RSA key 0x281F0DB8D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg: aka "Michael S. Tsirkin <mst@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 0270 606B 6F3C DF3D 0B17 0970 C350 3912 AFBE 8E67
# Subkey fingerprint: 5D09 FD08 71C8 F85B 94CA 8A0D 281F 0DB8 D28D 5469
* remotes/mst/tags/for_upstream: (21 commits)
i386/acpi: update expected acpi files
virtio-net: fix tx queue size for !vhost-user
tests: Add unit tests for the VM Generation ID feature
vhost-user: unregister slave req handler at cleanup time
vhost: ensure vhost_ops are set before calling iotlb callback
intel_iommu: fix migration breakage on mr switch
hw/acpi: remove dead acpi code
fw_cfg: move setting of FW_CFG_VERSION_DMA bit to fw_cfg_init1()
fw_cfg: don't map the fw_cfg IO ports in fw_cfg_io_realize()
i386/kvm/pci-assign: Use errp directly rather than local_err
i386/kvm/pci-assign: Fix return type of verify_irqchip_kernel()
pci: Convert shpc_init() to Error
pci: Convert to realize
pci: Replace pci_add_capability2() with pci_add_capability()
pci: Make errp the last parameter of pci_add_capability()
pci: Fix the wrong assertion.
pci: Add comment for pci_add_capability2()
pci: Clean up error checking in pci_add_capability()
intel_iommu: relax iq tail check on VTD_GCMD_QIE enable
hw/pci-bridge/dec: Classify the DEC PCI bridge as bridge device
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Add braces around if-statements.
No functional change.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use extract32 instead of open-coding the shifting and masking.
No functional change.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Use bool instead of unsigned int to represent flags.
No functional change.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Making the opcode list 'const' saves memory.
Some function arguments and local variables needed 'const', too.
Add also 'static' to two local functions.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: Stefan Weil <sw@weilnetz.de>
[EI: Removed old prototypes to fix the build]
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
We dropped some dead code; update the expected table binaries.
Fixes: 4d7e7f2702 ("hw/acpi: remove dead acpi code")
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Current code segfaults when no nic peer is specified.
Fix it up - fall back to default queue size.
Fixes: 9b02e1618c ("virtio-net: enable configurable tx queue size")
Cc: Wei Wang <wei.w.wang@intel.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The following tests are implemented:
* test that a GUID passed in by command line is propagated to the guest.
Read the GUID from guest memory
* test that the "auto" argument to the GUID generates a valid GUID, as
seen by the guest.
* test that a GUID passed in can be queried from the monitor
This patch is loosely based on a previous patch from:
Gal Hammer <ghammer@redhat.com> and Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Ben Warren <ben@skyportsystems.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
If the backend sends a request just before closing the socket,
the aio dispatcher might schedule its reading after the vhost
device has been cleaned up, leading to a NULL pointer dereference
in slave_read().
vhost_user_cleanup() already closes the socket, but that is not
enough: the handler has to be unregistered.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This patch fixes a crash that happens when vhost-user iommu
support is enabled and vhost-user socket is closed.
When it happens, if an IOTLB invalidation notification is sent
by the IOMMU, vhost_ops's NULL pointer is dereferenced.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Migration is broken after the vfio integration work:
qemu-kvm: AHCI: Failed to start FIS receive engine: bad FIS receive buffer address
qemu-kvm: Failed to load ich9_ahci:ahci
qemu-kvm: error while loading state for instance 0x0 of device '0000:00:1f.2/ich9_ahci'
qemu-kvm: load of migration failed: Operation not permitted
The problem is that the vfio work introduced dynamic memory region
switching (it is also used for the future PT mode), and this memory
region layout is not properly delivered to the destination when migration
happens. The solution is to rebuild the layout in post_load.
Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1459906
Fixes: 558e0024 ("intel_iommu: allow dynamic switch of IOMMU region")
Reviewed-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The setting of the FW_CFG_VERSION_DMA bit is the same across both the
TYPE_FW_CFG_MEM and TYPE_FW_CFG_IO devices, so unify the logic in
fw_cfg_init1().
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Gabriel Somlo <somlo@cmu.edu>
As indicated by Laszlo it is a QOM bug for the realize() method to actually
map the device. Set up the IO regions within fw_cfg_io_realize() and defer
the mapping with sysbus_add_io() to the caller, as already done in
fw_cfg_init_mem_wide().
This makes the iobase and dma_iobase properties now obsolete so they can be
removed.
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Tested-by: Gabriel Somlo <somlo@cmu.edu>
When a function has no success value to transmit, it is usually made to
return void. That has turned out not to be a success here, because it means
that an extra local_err variable and error_propagate() will be needed. This
leads to cumbersome code; it is therefore worth transmitting success/failure
in the return value. So fix the return type to avoid it.
Cc: pbonzini@redhat.com
Cc: rth@twiddle.net
Cc: ehabkost@redhat.com
Cc: mst@redhat.com
Cc: armbru@redhat.com
Cc: marcel@redhat.com
Signed-off-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
On success, pci_add_capability2() returns a positive value. On
failure, it sets an error and returns a negative value.
pci_add_capability() laboriously checks this behavior. No other
caller does. Drop the checks from pci_add_capability().
Cc: mst@redhat.com
Cc: marcel@redhat.com
Signed-off-by: Mao Zhongyi <maozy.fnst@cn.fujitsu.com>
Reviewed-by: Marcel Apfelbaum <marcel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The VT-d spec (section 6.5.2) prescribes software to zero the
Invalidation Queue Tail Register before enabling the VTD_GCMD_QIE
Global Command Register bit. Windows Server 2012 R2 and possibly
other older Windows versions violate the protocol and set a
non-zero queue tail first, which in effect makes them crash early
on boot with -device intel-iommu,intremap=on.
This commit relaxes the check and instead of failing to enable
VTD_GCMD_QIE with vtd_err_qi_enable, it behaves as if the tail
register was set just after enabling VTD_GCMD_QIE
(see vtd_handle_iqt_write).
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
This patch enables the virtio-net tx queue size to be configurable
between 256 (the default queue size) and 1024 by the user when the
vhost-user backend is used.
Currently, the maximum tx queue size for other backends is 512 due
to the following limitations:
- QEMU backend: the QEMU backend implementation in some cases may
send 1024+1 iovs to writev.
- Vhost_net backend: there are possibilities that the guest sends
a vring_desc of memory which crosses a MemoryRegion thereby
generating more than 1024 iovs after translation from guest-physical
address in the backend.
Signed-off-by: Wei Wang <wei.w.wang@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Some code paths can lead to atomic accesses racing with memset()
on cpu->tb_jmp_cache, which can result in torn reads/writes
and is undefined behaviour in C11.
These torn accesses are unlikely to show up as bugs, but from code
inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the
ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and
therefore we might end up with a torn pointer (or who knows what,
because we are under undefined behaviour).
This patch converts parallel accesses to cpu->tb_jmp_cache to use
atomic primitives, thereby bringing these accesses back to defined
behaviour. The price to pay is to potentially execute more instructions
when clearing cpu->tb_jmp_cache, but given how infrequently they happen
and the small size of the cache, the performance impact I have measured
is within noise range when booting debian-arm.
Note that under "safe async" work (e.g. do_tb_flush) we could use memset
because no other vcpus are running. However I'm keeping these accesses
atomic as well to keep things simple and to avoid confusing analysis
tools such as ThreadSanitizer.
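For instance, clearing the cache then becomes a loop of atomic stores rather
than a memset (a sketch; the helper name is illustrative):

    static inline void cpu_tb_jmp_cache_clear(CPUState *cpu)
    {
        unsigned int i;

        /* Atomic stores keep concurrent readers within defined behaviour. */
        for (i = 0; i < TB_JMP_CACHE_SIZE; i++) {
            atomic_set(&cpu->tb_jmp_cache[i], NULL);
        }
    }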
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1497486973-25845-1-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We are relying on cpu_env being defined as a global, yet most
targets (i.e. all but arm/a64) have it defined as a local variable.
Luckily all of them use the same "cpu_env" name, but really
compilation shouldn't break if the name of that local variable
changed.
Fix it by using tcg_ctx.tcg_env, which all targets set in their
translate_init function. This change also helps paving the way
for the upcoming "translation loop common to all targets" work.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1497639397-19453-3-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
# gpg: Signature made Fri 30 Jun 2017 15:08:45 BST
# gpg: using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/block-pull-request:
block: Exploit BDRV_BLOCK_EOF for larger zero blocks
block: Add BDRV_BLOCK_EOF to bdrv_get_block_status()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
When we have a BDS with unallocated clusters, but asking the status
of its underlying bs->file or backing layer encounters an end-of-file
condition, we know that the rest of the unallocated area will read as
zeroes. However, pre-patch, this required two separate calls to
bdrv_get_block_status(), as the first call stops at the point where
the underlying file ends. Thanks to BDRV_BLOCK_EOF, we can now widen
the results of the primary status if the secondary status already
includes BDRV_BLOCK_ZERO.
In turn, this fixes a TODO mentioned in iotest 154, where we can now
see that all sectors in a partial cluster at the end of a file read
as zero when coupling the shorter backing file's status along with our
knowledge that the remaining sectors came from an unallocated cluster.
Also, note that the loop in bdrv_co_get_block_status_above() had an
inefficient exit: in cases where the active layer sets BDRV_BLOCK_ZERO
but does NOT set BDRV_BLOCK_ALLOCATED (namely, where we know we read
zeroes merely because our unallocated clusters lie beyond the backing
file's shorter length), we still ended up probing the backing layer
even though we already had a good answer.
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170505021500.19315-3-eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Just as the block layer already sets BDRV_BLOCK_ALLOCATED as a
shortcut for subsequent operations, there are also some optimizations
that are made easier if we can quickly tell that *pnum will advance
us to the end of a file, via a new BDRV_BLOCK_EOF which gets set
by the block layer.
This just plumbs up the new bit; subsequent patches will make use
of it.
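A sketch of the plumbing in the spirit of bdrv_co_get_block_status()
(simplified, not the exact code):

    ret = bs->drv->bdrv_co_get_block_status(bs, sector_num, nb_sectors,
                                            pnum, &file);
    if (ret < 0) {
        return ret;
    }

    /* Advertise that this answer already takes us to the end of the file,
     * so callers don't need a second query just to learn that. */
    if (*pnum && sector_num + *pnum == total_sectors) {
        ret |= BDRV_BLOCK_EOF;
    }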
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170505021500.19315-2-eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
ppc patch queue 2017-06-30
* More DRC cleanups, these now actually fix a few bugs
* Properly implements the openpic timers (they now count and
generate interrupts)
* Fixes for XICS migration
* Fixes for migration of POWER9 RPT guests
* The last of the compatibility mode rework
# gpg: Signature made Fri 30 Jun 2017 10:52:25 BST
# gpg: using RSA key 0x6C38CACA20D9B392
# gpg: Good signature from "David Gibson <david@gibson.dropbear.id.au>"
# gpg: aka "David Gibson (Red Hat) <dgibson@redhat.com>"
# gpg: aka "David Gibson (ozlabs.org) <dgibson@ozlabs.org>"
# gpg: aka "David Gibson (kernel.org) <dwg@kernel.org>"
# Primary key fingerprint: 75F4 6586 AE61 A66C C44E 87DC 6C38 CACA 20D9 B392
* remotes/dgibson/tags/ppc-for-2.10-20170630: (21 commits)
spapr: Clean up DRC set_isolation_state() path
spapr: Clean up DRC set_allocation_state path
spapr: Make DRC reset force DRC into known state
spapr: Split DRC release from DRC detach
spapr: Eliminate DRC 'signalled' state variable
spapr: Start hotplugged PCI devices in ISOLATED state
target-ppc: Enable open-pic timers to count and generate interrupts
hw/ppc/spapr.c: consecutive 'spapr->patb_entry = 0' statements
spapr: prevent QEMU crash when CPU realization fails
target/ppc: Proper cleanup when ppc_cpu_realizefn fails
spapr: fix migration of ICPState objects from/to older QEMU
xics: directly register ICPState objects to vmstate
target/ppc: Fix return value in tcg radix mmu fault handler
target/ppc/excp_helper: Take BQL before calling cpu_interrupt()
spapr: Fix migration of Radix guests
spapr: Add a "no HPT" encoding to HTAB migration stream
ppc: Rework CPU compatibility testing across migration
pseries: Reset CPU compatibility mode
pseries: Move CPU compatibility property to machine
qapi: add explicit null to string input and output visitors
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Old kvm.ko versions only supported a tiny number of ioeventfds so
virtio-pci avoids ioeventfds when kvm_has_many_ioeventfds() returns 0.
Do not check kvm_has_many_ioeventfds() when KVM is disabled since it
always returns 0. Since commit 8c56c1a592
("memory: emulate ioeventfd") it has been possible to use ioeventfds in
qtest or TCG mode.
This patch makes -device virtio-blk-pci,iothread=iothread0 work even
when KVM is disabled.
I have tested that virtio-blk-pci works under TCG both with and without
iothread.
This patch fixes qemu-iotests 068, which was accidentally merged early
despite the dependency on ioeventfd.
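The gist of the change is to consult kvm_has_many_ioeventfds() only when KVM
is actually in use, along these lines (a sketch; the surrounding function is
illustrative):

    static bool ioeventfd_usable(void)
    {
        /* Only old kvm.ko had the "too few ioeventfds" limitation; with TCG
         * or qtest the memory core emulates ioeventfd, so nothing to check. */
        if (kvm_enabled() && !kvm_has_many_ioeventfds()) {
            return false;
        }
        return true;
    }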
Cc: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Tested-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20170628184724.21378-7-stefanha@redhat.com
Message-id: 20170615163813.7255-2-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Existing tests do not touch the virtqueue used ring. Instead they poll
the virtqueue ISR register and peek into their request's device-specific
status field.
It turns out that the virtqueue ISR register can be set to 1 more than
once for a single notification (see commit
83d768b564 "virtio: set ISR on dataplane
notifications"). This causes problems for tests that assume a 1:1
correspondence between the ISR being 1 and request completion.
Peeking at device-specific status fields is also problematic if the
device has no field that can be abused for EINPROGRESS polling
semantics. This is the case if all the field's values may be set by the
device; there's no magic constant left for polling.
It's time to process the used ring for completed requests, just like a
real virtio guest driver. This patch adds the necessary APIs.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Tested-by: Eric Blake <eblake@redhat.com>
Tested-by: Kevin Wolf <kwolf@redhat.com>
Message-id: 20170628184724.21378-3-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
There are substantial differences in the various paths through
set_isolation_state(), both for setting to ISOLATED versus UNISOLATED
state and for logical versus physical DRCs.
So, split the set_isolation_state() method into isolate() and unisolate()
methods, and give it different implementations for the two DRC types.
Factor some minimal common checks, including for valid indicator values
(which we weren't previously checking) into rtas_set_isolation_state().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The allocation-state indicator should only actually be implemented for
"logical" DRCs, not physical ones. Factor a check for this, and also for
valid indicator state values into rtas_set_allocation_state(). Because
they don't exist for physical DRCs, there's no reason that we'd ever want
more than one method implementation, so it can just be a plain function.
In addition, the setting to USABLE and setting to UNUSABLE paths in
set_allocation_state() don't actually have much in common. So, split the
method into separate functions for each parameter value (drc_set_usable()
and drc_set_unusable()).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The reset handler for DRCs attempts several state transitions which are
subject to various checks and restrictions. But at reset time we know
there is no guest, so we can ignore most of the usual sequencing rules and
just set the DRC back to a known state. In fact, it's safer to do so.
The existing code also has several redundant checks for
drc->awaiting_release inside a block which has already tested that. This
patch removes those and sets the DRC to a fixed initial state based only
on whether a device is currently plugged or not.
With DRCs correctly reset to a state based on device presence, we don't
need to force state transitions as cold plugged devices are processed.
This allows us to remove all the callers of the set_*_state() methods from
outside spapr_drc.c.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
spapr_drc_detach() is called when qemu generic code requests a device be
unplugged. It makes a number of tests, which could well delay further
action until later, before actually detaching the device from the DRC.
This splits out the part which actually removes the device from the DRC
into spapr_drc_release(). This will be useful for further cleanups.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
The 'signalled' field in the DRC appears to be entirely a torturous
workaround for the fact that PCI devices were started in UNISOLATED state
for unclear reasons.
1) 'signalled' is already meaningless for logical (so far, all non PCI)
DRCs. It's always set to true (at least at any point it might be tested),
and can't be assigned any real meaning due to the way signalling works for
logical DRCs.
2) For PCI DRCs, the only time signalled would be false is when non-zero
functions of a multifunction device are hotplugged, followed by function
zero (the other way around is explicitly not permitted). In that case the
secondary function DRCs are attached, but the notification isn't sent to
the guest until function 0 is plugged.
3) signalled being false is used to allow a DRC detach to switch mode
back to ISOLATED state, which allows a secondary function to be hotplugged
then unplugged with function 0 never inserted. Without this a secondary
function starting in UNISOLATED state couldn't be detached again without
function 0 being inserted, all the functions configured by the guest, then
sent back to ISOLATED state.
4) But now that PCI DRCs start in ISOLATED state, there's nothing to be
done. If the guest doesn't get the notification, it won't switch the
device to UNISOLATED state, so nothing prevents it from being unplugged.
If the guest does move it to UNISOLATED state without the signal (due to
a manual drmgr call, for instance) then it really isn't safe to unplug it.
So, this patch removes the signalled variable and all code related to it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
PCI DRCs, and only PCI DRCs, are immediately moved to UNISOLATED isolation
state once the device is attached. This has been there from the initial
implementation, and it's not clear why.
The state diagram in PAPR 13.4 suggests PCI devices should start in
ISOLATED state until the guest moves them into UNISOLATED, and the code in
the guest-side drmgr tool seems to work that way too.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Previously QEMU's open-pic implemented the 4 open-pic timers including
all timer registers, but the timers did not "count" or generate any
interrupts. The patch makes the timers both count and generate
interrupts. The timer clock frequency is fixed at 25 MHz.
--
Responding to V2 patch comments.
- Simplify clock frequency logic and commentary.
- Remove camelCase variables.
- Timer objects now created at init rather than lazily.
Signed-off-by: Aaron Larson <alarson@ddci.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
In ppc_spapr_reset(), if the guest is using HPT, the code was executing:
    } else {
        spapr->patb_entry = 0;
        spapr_setup_hpt_and_vrma(spapr);
    }
And, at the end of spapr_setup_hpt_and_vrma:
    /* We're setting up a hash table, so that means we're not radix */
    spapr->patb_entry = 0;
Resulting in spapr->patb_entry being assigned to 0 twice in a row.
Given that 'spapr_setup_hpt_and_vrma' is also called inside
'spapr_check_setup_free_hpt' of spapr_hcall.c, this trivial patch removes
the 'patb_entry = 0' assignment from the 'else' clause inside ppc_spapr_reset
to avoid this behavior.
Signed-off-by: Daniel Henrique Barboza <danielhb@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ICPState objects were being allocated before CPU thread realization.
However commit 9ed656631d (xics: setup cpu at realize time) reversed it
by allocating ICPState objects after CPU thread is realized. But it
didn't take care to fix the error path because of which we observe
a SIGSEGV when CPU thread realization fails during cold/hotplug.
Fix this by ensuring that we only call object_unparent() on the ICPState
object in the case where it was actually created earlier.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
If ppc_cpu_realizefn() fails after cpu_exec_realizefn() has been
called, we will have to undo whatever cpu_exec_realizefn() did
by explicitly calling cpu_exec_unrealizefn(), which is currently
missing. Failure to do this proper cleanup will result in a CPU
which was never fully realized lingering on the cpus list, causing a
SIGSEGV later (for example when running "info cpus").
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Commit 5bc8d26de2 ("spapr: allocate the ICPState object from under
sPAPRCPUCore") moved ICPState objects from the machine to CPU cores.
This is an improvement since we no longer allocate ICPState objects
that will never be used. But it has the side-effect of breaking
migration of older machine types from older QEMU versions.
This patch allows spapr to register dummy "icp/server" entries to vmstate.
These entries use a dedicated VMStateDescription that can swallow and
discard the state of an incoming migration stream, and that doesn't send
anything on outgoing migration.
As for real ICPState objects, the instance_id is the cpu_index of the
corresponding vCPU, which happens to be equal to the generated instance_id
of older machine types.
The machine can unregister/register these entries when CPUs are dynamically
plugged/unplugged.
This is only available for pseries-2.9 and older machines, thanks to a
compat property.
Signed-off-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The ICPState objects are currently registered to vmstate as qdev objects.
Their instance ids are hence computed automatically in the migration code,
and thus depends on the order the CPU cores were plugged.
If the destination had its CPU cores plugged in a different order than the
source, then ICPState objects will have different instance_ids and load
the wrong state.
Since CPU objects have a reliable cpu_index which is already used as
instance_id in vmstate, let's use it for ICPState as well.
Please note that this doesn't break migration. Older machine types used to
allocate and realize all ICPState objects at machine init time, for the whole
lifetime of the machine. The qdev instance ids are thus 0,1,2... nr_servers
and happen to map to the vCPU indexes.
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Laurent Vivier <lvivier@redhat.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The mmu fault handler should return 0 if it was able to successfully
handle the fault and a positive value otherwise.
Currently the tcg radix mmu fault handler will return 1 after
successfully handling a fault in virtual mode. This is incorrect,
so fix it so that it returns 0 in this case.
The handler already correctly returns 0 when a fault was handled
in real mode and 1 if an interrupt was generated.
Fixes: d5fee0bbe6 ("target/ppc: Implement ISA V3.00 radix page fault handler")
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Add a "no HPT" encoding (using value -1) to the HTAB migration
stream (in the place of HPT size) when the guest doesn't allocate HPT.
This will help the target side to match target HPT with the source HPT
and thus enable successful migration.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Migrating between different CPU versions is a bit complicated for ppc.
A long time ago, we ensured identical CPU versions at either end by
checking the PVR had the same value. However, this breaks under KVM
HV, because we always have to use the host's PVR - it's not
virtualized. That would mean we couldn't migrate between hosts with
different PVRs, even if the CPUs are close enough to compatible in
practice (sometimes identical cores with different surrounding logic
have different PVRs, so this happens in practice quite often).
So, we removed the PVR check, but instead checked that several flags
indicating supported instructions matched. This turns out to be a bad
idea, because those instruction masks are not architected information, but
essentially a TCG implementation detail. So changes to qemu internal CPU
modelling can break migration - this happened between qemu-2.6 and
qemu-2.7. That was addressed by 146c11f1 "target-ppc: Allow eventual
removal of old migration mistakes".
Now, verification of CPU compatibility across a migration basically doesn't
happen. We simply ignore the PVR of the incoming migration, and hope the
cpu on the destination is close enough to work.
Now that we've cleaned up handling of processor compatibility modes
for the pseries machine type, we can do better. For new machine types
(pseries-2.10+) we allow migration if:
* The source and destination PVRs are for the same type of CPU, as
determined by CPU class's pvr_match function
OR * When the source was in a compatibility mode, and the destination CPU
supports the same compatibility mode
For older machine types we retain the existing behaviour - current CAS
code will usually set a compat mode which would break backwards
migration if we made them use the new behaviour. [Fixed from an
earlier version by Greg Kurz].
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Currently, the CPU compatibility mode is set when the cpu is initialized,
then again when the guest negotiates features. This means if a guest
negotiates a compatibility mode, then reboots, that compatibility mode
will be retained across the reset.
Usually that will get overridden when features are negotiated on the next
boot, but it's still not really correct. This patch moves the initial set
up of the compatibility mode from cpu init to reset time. The mode *is*
retained if the reboot was caused by the feature negotiation (it might
be important in that case, though it's unlikely).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Server class POWER CPUs have a "compat" property, which is used to set the
backwards compatibility mode for the processor. However, this only makes
sense for machine types which don't give the guest access to hypervisor
privilege - otherwise the compatibility level is under the guest's control.
To reflect this, this removes the CPU 'compat' property and instead
creates a 'max-cpu-compat' property on the pseries machine. Strictly
speaking this breaks compatibility, but AFAIK the 'compat' option was
never (directly) used with -device or device_add.
The option was used with -cpu. So, to maintain compatibility, this
patch adds a hack to the cpu option parsing to strip out any compat
options supplied with -cpu and set them on the machine property
instead of the now deprecated cpu property.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Tested-by: Greg Kurz <groug@kaod.org>
Tested-by: Andrea Bolognani <abologna@redhat.com>
When using the 40p machine, soundhw_init() is currently called twice,
one time from vl.c and one time from ibm_40p_init(). The call in
ibm_40p_init() was likely just a copy-and-paste from an old version
of the prep machine - but there the call to audio_init() (which was
the previous name of this function) has been removed many years ago
already, with commit b3e6d591b0
("audio: enable PCI audio cards for all PCI-enabled targets"), so
we certainly also do not need the soundhw_init() in the 40p function
anymore nowadays.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Sahid Ferdjaoui <sferdjao@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Hervé Poussineau <hpoussin@reactos.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
HMP pull 2017-06-29
# gpg: Signature made Thu 29 Jun 2017 17:27:55 BST
# gpg: using RSA key 0x0516331EBC5BFDE7
# gpg: Good signature from "Dr. David Alan Gilbert (RH2) <dgilbert@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 45F5 C71B 4A0C B7FB 977A 9FA9 0516 331E BC5B FDE7
* remotes/dgilbert/tags/pull-hmp-20170629:
Add chardev-send-break monitor command
monitor: Add -a (all) option to info registers
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Sending a break on a serial console can be useful for debugging the
guest. But not all chardev backends support sending breaks (only telnet
and mux do). The chardev-send-break command allows sending a break even
if using other backends.
Signed-off-by: Stefan Fritsch <sf@sfritsch.de>
Acked-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20170611074817.13621-1-sf@sfritsch.de>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Use 'send a break' in all 3 pieces of text as suggested by eblake
The info registers command in the qemu monitor is used to dump register
values.
Currently this command uses the monitor cpu (which can be set by the
user) as the cpu for whose registers will be dumped. Sometimes it is
useful to see the registers for all cpus and currently this requires
setting the monitor cpu and the re-running the command for each cpu
in the system. I would be nice if there was an easier way to do this.
Add the "-a" option to the info registers command to dump the register
values for all cpus.
Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com>
Message-Id: <20170608054116.17203-1-sjitindarsingh@gmail.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
- fixes a minor bug that could possibly prevent old guests from removing
directories
- makes default permissions for new files configurable from the cmdline
when using mapped security modes
- handle transport errors
- g_malloc()+memcpy() converted to g_memdup()
# gpg: Signature made Thu 29 Jun 2017 14:12:42 BST
# gpg: using DSA key 0x02FC3AEB0101DBC2
# gpg: Good signature from "Greg Kurz <groug@kaod.org>"
# gpg: aka "Greg Kurz <groug@free.fr>"
# gpg: aka "Greg Kurz <gkurz@linux.vnet.ibm.com>"
# gpg: aka "Gregory Kurz (Groug) <groug@free.fr>"
# gpg: aka "[jpeg image of size 3330]"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 2BD4 3B44 535E C0A7 9894 DBA2 02FC 3AEB 0101 DBC2
* remotes/gkurz/tags/for-upstream:
9pfs: handle transport errors in pdu_complete()
xen-9pfs: disconnect if buffers are misconfigured
virtio-9p: break device if buffers are misconfigured
virtio-9p: message header is 7-byte long
virtio-9p: record element after sanity checks
9pfs: replace g_malloc()+memcpy() with g_memdup()
9pfs: local: Add support for custom fmode/dmode in 9ps mapped security modes
9pfs: local: remove: use correct path component
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The [NSEvent modifierFlags] method returns an NSEventModifierFlags type value in Mac OS 10.10. It used to be of type NSUInteger. Replacing NSEventModifierFlags with NSUInteger allows the cocoa.m file to be compiled on older versions of Mac OS. This patch has been tested on Mac OS 10.6 and Mac OS 10.12 without problems.
Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Message-id: F6C36C1A-4661-48F4-BEA6-3994889927D0@gmail.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
It is hard to analyze trace logs with multiple virtio-blk devices
because none of the trace events include the VirtIODevice *vdev.
This patch adds vdev so it's clear which device a request is associated
with.
I considered using VirtIOBlock *s instead but VirtIODevice *vdev is more
general and may be correlated with generic virtio trace events like
virtio_set_status.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Message-id: 20170614092930.11234-1-stefanha@redhat.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Contrary to what is written in the comment, a buggy guest can misconfigure
the transport buffers and pdu_marshal() may return an error. If this ever
happens, it is up to the transport layer to handle the situation (9P is
transport agnostic).
This fixes Coverity issue CID1348518.
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Implement xen_9pfs_disconnect by unbinding the event channels. On
xen_9pfs_free, call disconnect if any event channels haven't been
disconnected.
If the frontend misconfigured the buffers set the backend to "Closing"
and disconnect it. Misconfigurations include requesting a read of more
bytes than available on the ring buffer, or claiming to be writing more
data than available on the ring buffer.
Signed-off-by: Stefano Stabellini <stefano@aporeto.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
The 9P protocol is transport agnostic: if the guest misconfigured the
buffers, the best we can do is to set the broken flag on the device.
Signed-off-by: Greg Kurz <groug@kaod.org>
The 9p spec at http://man.cat-v.org/plan_9/5/intro reads:
"Each 9P message begins with a four-byte size field specify-
ing the length in bytes of the complete message including
the four bytes of the size field itself. The next byte is
the message type, one of the constants in the enumeration in
the include file <fcall.h>. The next two bytes are an iden-
tifying tag, described below."
ie, each message starts with a 7-byte long header.
The core 9P code already assumes this pretty much everywhere. This patch
does the following:
- makes the assumption explicit in the common 9p.h header, since it isn't
related to the transport
- open codes the header size in handle_9p_output() and hardens the sanity
check on the space needed for the reply message
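Expressed as code, the fixed header size is simply (the macro and variable
names here are illustrative, not necessarily the ones used in 9p.h):

    /* Every 9P message starts with: size[4] type[1] tag[2] */
    #define P9_MSG_HDR_SIZE 7

    /* e.g. a reply buffer smaller than the header can never be valid */
    if (reply_size < P9_MSG_HDR_SIZE) {
        goto out_broken;
    }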
Signed-off-by: Greg Kurz <groug@kaod.org>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
If the guest sends a malformed request, we end up with a dangling pointer
in V9fsVirtioState. This doesn't seem to cause any bug, but let's remove
this side effect anyway.
Signed-off-by: Greg Kurz <groug@kaod.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
I found these patterns by grepping the source tree. I don't have a
coccinelle script for it!
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
In mapped security modes, files are created with very restrictive
permissions (600 for files and 700 for directories). This makes
file sharing between virtual machines and users on the host rather
complicated. Imagine eg. a group of users that need to access data
produced by processes on a virtual machine. Giving those users access
to the data will be difficult since the group access mode is always 0.
This patch makes the default mode for both files and directories
configurable. Existing setups that don't know about the new parameters
keep using the current secure behavior.
Signed-off-by: Tobias Schramm <tobleminer@gmail.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Commit a0e640a8 introduced a path processing error.
Pass fstatat the dirpath based path component instead
of the entire path.
Signed-off-by: Bruce Rogers <brogers@suse.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
migration/next for 20170628
# gpg: Signature made Wed 28 Jun 2017 12:16:44 BST
# gpg: using RSA key 0xF487EF185872D723
# gpg: Good signature from "Juan Quintela <quintela@redhat.com>"
# gpg: aka "Juan Quintela <quintela@trasno.org>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 1899 FF8E DEBF 58CC EE03 4B82 F487 EF18 5872 D723
* remotes/juanquintela/tags/migration/20170628:
exec: fix access to ram_list.dirty_memory when sync dirty bitmap
migration: add "return-path" capability
vmstate: error hint for failed equal checks
migration: add comment for TYPE_MIGRATE
migration: hmp: dump globals
migration: merge enforce_config_section somewhat
migration: move skip_section_footers
migration: move skip_configuration out
migration: move only_migratable to MigrationState
migration: move global_state.optional out
migration: let MigrationState be a qdev
vl: clean up global property registration
accel: introduce AccelClass.global_props
machine: export register_compat_prop()
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Xen 2017/06/27
# gpg: Signature made Tue 27 Jun 2017 23:02:43 BST
# gpg: using RSA key 0x894F8F4870E1AE90
# gpg: Good signature from "Stefano Stabellini <stefano.stabellini@eu.citrix.com>"
# gpg: aka "Stefano Stabellini <sstabellini@kernel.org>"
# Primary key fingerprint: D04E 33AB A51F 67BA 07D3 0AEA 894F 8F48 70E1 AE90
* remotes/sstabellini/tags/xen-20170627-tag:
xen-disk: add support for multi-page shared rings
xen-disk: only advertize feature-persistent if grant copy is not available
xen/disk: don't leak stack data via response ring
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
The 32-bit PPC auxv is a bit complicated because in the
mists of time it used to be 16-aligned rather than directly
after the environment. Older glibc versions had code to
try to probe for whether it needed alignment or not:
https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c;hb=e84eabb3871c9b39e59323bf3f6b98c2ca9d1cd0
and the kernel has code which puts some magic entries at
the bottom to ensure that the alignment probe fails:
http://elixir.free-electrons.com/linux/latest/source/arch/powerpc/include/asm/elf.h#L158
QEMU has similar code too, but it was broken by commit
7c4ee5bcc8, which changed elfload.c from filling in
the auxv starting at the highest address and working down
to starting at the lowest address and working up. This
means that the ARCH_DLINFO hook must now be invoked first
rather than last, and the entries in it for PPC must
be reversed so that the magic AT_IGNOREPPC entries come
at the lowest address in the auxv as they should.
The effect of this was that if running a guest binary that
used an old glibc with the alignment probing the guest ld.so
code would segfault if the size of the guest environment and
argv happened to put the auxv at an address that triggered
the alignment code in the guest glibc.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Tested-by: Richard Henderson <rth@twiddle.net>
Message-id: 1498582198-6649-1-git-send-email-peter.maydell@linaro.org
In cpu_physical_memory_sync_dirty_bitmap(rb, start, ...), the 2nd
argument 'start' is relative to the start of the ramblock 'rb'. When
it's used to access the dirty memory bitmap of ram_list (i.e.
ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]->blocks[]), an offset to
the start of all RAM (i.e. rb->offset) should be added to it, which has
however been missed since c/s 6b6712efcc. For a ramblock of host memory
backend whose offset is not zero, cpu_physical_memory_sync_dirty_bitmap()
synchronizes the incorrect part of the dirty memory bitmap of ram_list
to the per ramblock dirty bitmap. As a result, a guest with host
memory backend may crash after migration.
Fix it by adding the offset of ramblock when accessing the dirty memory
bitmap of ram_list in cpu_physical_memory_sync_dirty_bitmap().
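A simplified sketch of the fix (the real code also deals with word-sized
bitmap chunks; variable names here are illustrative):

    /* 'start' is relative to the ramblock, but ram_list.dirty_memory[] is
     * indexed by global ram_addr_t, so add the block's offset first. */
    ram_addr_t global_start = start + rb->offset;
    unsigned long page = global_start >> TARGET_PAGE_BITS;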
Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Message-Id: <20170628083704.24997-1-haozhong.zhang@intel.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Tested-by: Juan Quintela <quintela@redhat.com>
Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
When this capability is enabled, QEMU will use the return path even for
precopy migration. This is helpful in at least one case: when the destination
failed to load the image while the source quit without confirmation. With
the return path, the source will wait for the last response from the
destination, and if the destination fails, the migration fails on the source,
so the guest can be run again there (rather than assuming all is good and
losing the guest after the source quits).
It needs to be enabled explicitly on the source; otherwise it is disabled.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498472935-14461-1-git-send-email-peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
In some cases a failing VMSTATE_*_EQUAL does not mean we detected a bug,
but it's actually the best we can do. Especially in these cases a verbose
error message is required.
Let's introduce infrastructure for specifying an error hint to be used if
equal check fails. Let's do this by adding a parameter to the _EQUAL
macros called _err_hint. Also change all current users to pass NULL as
last parameter so nothing changes for them.
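With the new parameter, callers that want a hint can pass one, while existing
users simply add NULL; a sketch (the device and field names are made up):

    static const VMStateDescription vmstate_example = {
        .name = "example-device",
        .version_id = 1,
        .fields = (VMStateField[]) {
            /* unchanged callers just pass NULL as the new last argument */
            VMSTATE_UINT32_EQUAL(num_queues, ExampleState, NULL),
            /* others can tell the user what a mismatch actually means */
            VMSTATE_UINT32_EQUAL(ring_size, ExampleState,
                                 "ring_size must match on source and destination"),
            VMSTATE_END_OF_LIST()
        },
    };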
Signed-off-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Message-Id: <20170623144823.42936-1-pasic@linux.vnet.ibm.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Now we have some globals that can be configured for migration. Dump them
in HMP info migration for better debugging.
(we can also use this to monitor whether COMPAT fields are applied
correctly on compatible machines)
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-11-git-send-email-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
These two parameters:
- MachineState::enforce_config_section
- MigrationState::send_configuration
play a similar role here. This patch merges the first one into the
second, so we have a single place to check whether we need to send the
configuration section.
I didn't remove the MachineState.enforce_config_section field, since at
the point that machine property is applied (in machine_set_property())
the global properties and the migration object have not been
initialized yet, so it's still not easy to pass that boolean to
MigrationState at such an early time.
A natural benefit of the current patch is that the meaning of
"enforce-config-section" is kept, since it still has the highest
priority (that is what "enforce" means, I guess).
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-10-git-send-email-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
It was in SaveState; now it is moved to MigrationState, its meaning is
inverted, and it is renamed to "send_configuration". Again, use
HW_COMPAT_2_3 for old PC/SPAPR machines, and accel_register_prop() for
xen_init().
Remove savevm_skip_configuration().
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-8-git-send-email-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
One less global variable, and it only matters to migration.
We keep the old "--only-migratable" option, but now we also support:
-global migration.only-migratable=true
For now the old interface is still kept.
Since vl.c now has no way to access migrate_get_current(), export a
function for it to set up only_migratable.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-7-git-send-email-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Put it into MigrationState, so that we can use the properties to specify
whether to enable storing the global state.
Remove global_state_set_optional(), since now we can use HW_COMPAT_2_3
for x86/power, and AccelClass.global_props for Xen.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-6-git-send-email-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Let the old man "MigrationState" join the object family. The direct
benefit is that we can start to use all the property features that come
with QDev, like HW_COMPAT_* bits and command line setup of migration
parameters (so we never need to set them up each time using HMP/QMP,
which is really, really attractive for test writers), etc.
I see no reason to disallow this yet. So let's start from this one, and
see whether it turns out to be any good.
Now we init the MigrationState struct statically in main() to make sure
it's initialized after global properties are applied, since we'll use
them during creation of the object.
No functional change at all.
Reviewed-by: Juan Quintela <quintela@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-5-git-send-email-peterx@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
It's not that clear how the global properties are registered into
global_props (nor what their priority relationship is). Let's provide a
single function to be called in main() for that, with a comment to
explain it a bit.
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-4-git-send-email-peterx@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Introduce this new field for the accelerator classes so that in the
future each specific accelerator can register its own global properties
to be used by the rest of the system. It works just like the old
machine compatibility properties do, but is tailored for accelerators.
Introduce register_compat_props_array() for it, and export it so that
it may be used by other code in the future.
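As a rough sketch of how an accelerator might use the new field (the
property values below are illustrative, not taken from this patch):
  static GlobalProperty accel_compat_props[] = {
      { .driver = "migration", .property = "store-global-state", .value = "off" },
      { /* end of list */ }
  };
  ...
  ac->global_props = accel_compat_props;  /* picked up via register_compat_props_array() */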
Suggested-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-3-git-send-email-peterx@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
We have HW_COMPAT_*, but that is bound only to machines, not to other
things (like accelerators). Behind it, register_compat_prop() is what
does the real work. Let's export the function for further use outside
the HW_COMPAT_* magic.
Meanwhile, move it to qdev-properties.c, which seems more appropriate
(since it will no longer be used only in machine code).
Signed-off-by: Peter Xu <peterx@redhat.com>
Message-Id: <1498536619-14548-2-git-send-email-peterx@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
The blkif protocol has had provision for negotiation of multi-page shared
rings for some time now, and many guest OSes have support in their
frontend drivers.
This patch makes the necessary modifications for xen-disk to support a
shared ring of up to order 4 (i.e. 16 pages).
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
If grant copy is available then it will always be used in preference to
persistent maps. In this case feature-persistent should not be advertised
to the frontend, otherwise it may needlessly copy data into persistently
granted buffers.
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Rather than constructing a local structure instance on the stack, fill
the fields directly on the shared ring, just like other (Linux)
backends do. Build on the fact that all response structure flavors are
actually identical (aside from alignment and padding at the end).
This is XSA-216.
Reported by: Anthony Perard <anthony.perard@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Anthony PERARD <anthony.perard@citrix.com>
edgar/mmio-exec-v2.for-upstream
# gpg: Signature made Tue 27 Jun 2017 16:22:30 BST
# gpg: using RSA key 0x29C596780F6BCA83
# gpg: Good signature from "Edgar E. Iglesias (Xilinx key) <edgar.iglesias@xilinx.com>"
# gpg: aka "Edgar E. Iglesias <edgar.iglesias@gmail.com>"
# Primary key fingerprint: AC44 FEDC 14F7 F1EB EDBF 4151 29C5 9678 0F6B CA83
* remotes/edgar/tags/edgar/mmio-exec-v2.for-upstream:
xilinx_spips: allow mmio execution
exec: allow to get a pointer for some mmio memory region
introduce mmio_interface
qdev: add MemoryRegion property
cputlb: fix the way get_page_addr_code fills the tlb
cputlb: move get_page_addr_code
cputlb: cleanup get_page_addr_code to use VICTIM_TLB_HIT
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
This allows execution from the lqspi area.
When request_ptr is called, the device loads 1024 bytes from the SPI
device. This code can then be executed by the guest.
Tested-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
This introduces a special callback which allows code to be run from
some MMIO devices.
A SysBusDevice with a MemoryRegion which implements the request_ptr
callback will be notified when the guest tries to execute code from its
offset. It can then, e.g., pre-load some code from an SPI device or
request a pointer from an external simulator, etc.
When the pointer or the data in it are no longer valid, the device has
to invalidate it.
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
get_page_addr_code(..) does a cpu_ldub_code to fill the TLB.
This can lead to side effects if a device is mapped at this address.
So this patch replaces the cpu_memory_ld with a tlb_fill.
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Signed-off-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
Block layer patches
# gpg: Signature made Mon 26 Jun 2017 14:07:32 BST
# gpg: using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6
* remotes/kevin/tags/for-upstream: (60 commits)
qemu-img: don't shadow opts variable in img_dd()
block: Do not strcmp() with NULL uri->scheme
blkverify: Catch bs->exact_filename overflow
blkdebug: Catch bs->exact_filename overflow
fix: avoid an infinite loop or a dangling pointer problem in img_commit
block: change variable names in BlockDriverState
block: Remove bdrv_aio_readv/writev/flush()
qed: Use bdrv_co_* for coroutine_fns
qed: Add coroutine_fn to I/O path functions
qed: Use a coroutine for need_check_timer
qed: Simplify request handling
qed: Use CoQueue for serialising allocations
qed: Implement .bdrv_co_readv/writev
qed: Remove recursion in qed_aio_next_io()
qed: Remove ret argument from qed_aio_next_io()
qed: Add return value to qed_aio_read/write_data()
qed: Add return value to qed_aio_write_inplace/alloc()
qed: Add return value to qed_aio_write_cow()
qed: Add return value to qed_aio_write_main()
qed: Add return value to qed_aio_write_l2_update()
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Block patches for the block queue
# gpg: Signature made Mon Jun 26 14:56:24 2017 CEST
# gpg: using RSA key 0xF407DB0061D5CF40
# gpg: Good signature from "Max Reitz <mreitz@redhat.com>"
# Primary key fingerprint: 91BE B60A 30DB 3E88 57D1 1829 F407 DB00 61D5 CF40
* mreitz/tags/pull-block-2017-06-26:
qemu-img: don't shadow opts variable in img_dd()
block: Do not strcmp() with NULL uri->scheme
blkverify: Catch bs->exact_filename overflow
blkdebug: Catch bs->exact_filename overflow
fix: avoid an infinite loop or a dangling pointer problem in img_commit
block: change variable names in BlockDriverState
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
uri_parse(...)->scheme may be NULL. In fact, probably every field may be
NULL, and the callers do test this for all of the other fields but not
for scheme (except for block/gluster.c; block/vxhs.c does not access
that field at all).
We can easily fix this by using g_strcmp0() instead of strcmp().
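For reference, a check along these lines (the "nbd" scheme is just an
example) no longer misbehaves when no scheme was given:
  /* strcmp() with a NULL argument is undefined behaviour; g_strcmp0()
   * treats NULL as sorting before any non-NULL string, so it is safe */
  if (g_strcmp0(uri->scheme, "nbd") != 0) {
      return -EINVAL;  /* illustrative error handling */
  }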
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20170613205726.13544-1-mreitz@redhat.com
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
All functions that are marked coroutine_fn can directly call the
bdrv_co_* version of functions instead of going through the wrapper.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that we stay in coroutine context for the whole request when doing
reads or writes, we can add coroutine_fn annotations to many functions
that can do I/O or yield directly.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
This fixes the last place where we degraded from AIO to actual blocking
synchronous I/O requests. Putting it into a coroutine means that instead
of blocking, the coroutine simply yields while doing I/O.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that we process a request in the same coroutine from beginning to
end and don't drop out of it any more, we can look like a proper
coroutine-based driver and simply call qed_aio_next_io() and get a
return value from it instead of spawning an additional coroutine that
reenters the parent when it's done.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Now that we're running in coroutine context, the ad-hoc serialisation
code (which drops a request that has to wait out of coroutine context)
can be replaced by a CoQueue.
This means that when we resume a serialised request, it is running in
coroutine context again and its I/O isn't blocking any more.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Most of the qed code is now synchronous and matches the coroutine model.
One notable exception is the serialisation between requests which can
still schedule a callback. Before we can replace this with coroutine
locks, let's convert the driver's external interfaces to the coroutine
versions.
We need to be careful to handle both requests that call the completion
callback directly from the calling coroutine (i.e. fully synchronous
code) and requests that involve some callback, where we need to yield
and wait for the completion callback coming from outside the coroutine.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Manos Pitsidianakis <el13635@mail.ntua.gr>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Instead of calling itself recursively as the last thing, just convert
qed_aio_next_io() into a loop.
This patch is best reviewed with 'git show -w' because most of it is
just whitespace changes.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but
just return an error code and let the caller handle it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but
just return an error code and let the caller handle it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but
just return an error code and let the caller handle it.
While refactoring qed_aio_write_alloc() to accommodate the change,
qed_aio_write_zero_cluster() ended up with a single line, so I chose to
inline that line and remove the function completely.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but
just return an error code and let the caller handle it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but
just return an error code and let the caller handle it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Don't recurse into qed_aio_next_io() and qed_aio_complete() here, but
just return an error code and let the caller handle it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
qed_commit_l2_update() is unconditionally called at the end of
qed_aio_write_l1_update(). Inline it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Note that this code is generally not running in coroutine context, so
this is an actual blocking synchronous operation. We'll fix this in a
moment.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Note that this code is generally not running in coroutine context, so
this is an actual blocking synchronous operation. We'll fix this in a
moment.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The GenericCB infrastructure isn't used any more. Remove it.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Note that this code is generally not running in coroutine context, so
this is an actual blocking synchronous operation. We'll fix this in a
moment.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Note that this code is generally not running in coroutine context, so
this is an actual blocking synchronous operation. We'll fix this in a
moment.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
With this change, qed_aio_write_prefill() and qed_aio_write_postfill()
collapse into a single function. This is reflected by a rename of the
combined function to qed_aio_write_cow().
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Note that this code is generally not running in coroutine context, so
this is an actual blocking synchronous operation. We'll fix this in a
moment.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Note that this code is generally not running in coroutine context, so
this is an actual blocking synchronous operation. We'll fix this in a
moment.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Instead of passing the return value to a callback, return it to the
caller so that the callback can be inlined there.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Note that this code is generally not running in coroutine context, so
this is an actual blocking synchronous operation. We'll fix this in a
moment.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
The qed driver serialises allocating write requests. When the active
allocation is finished, the AIO callback is called, but after this, the
next allocating request is immediately processed instead of leaving the
coroutine. Resuming another allocation request in the same request
coroutine means that the request now runs in the wrong coroutine.
The following is one of the possible effects of this: The completed
request will generally reenter its request coroutine in a bottom half,
expecting that it completes the request in bdrv_driver_pwritev().
However, if the second request actually yielded before leaving the
coroutine, the reused request coroutine is in an entirely different
place and is reentered prematurely. Not a good idea.
Let's make sure that we exit the coroutine after completing the first
request by resuming the next allocating request only with a bottom
half.
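The shape of the fix is roughly the following (a sketch only;
qed_resume_next_alloc is a made-up name for whatever resumes the next
queued allocation):
  /* instead of resuming the next allocating request inline ... */
  aio_bh_schedule_oneshot(bdrv_get_aio_context(bs),
                          qed_resume_next_alloc, s);
  /* ... so the current coroutine can return before the next request runs */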
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
We already have functions for doing these calculations, so let's use
them instead of doing everything by hand. This makes the code a bit
more readable.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
If the guest tries to write data that results on the allocation of a
new cluster, instead of writing the guest data first and then the data
from the COW regions, write everything together using one single I/O
operation.
This can improve the write performance by 25% or more, depending on
several factors such as the media type, the cluster size and the I/O
request size.
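Conceptually, the merged request is built as a scatter/gather list,
something like this (a sketch with assumed buffer names, not the actual
qcow2 code):
  QEMUIOVector qiov;
  qemu_iovec_init(&qiov, 3);
  qemu_iovec_add(&qiov, start_cow_buf, start_cow_len); /* COW data before the guest write */
  qemu_iovec_add(&qiov, guest_buf, guest_len);         /* the guest data itself */
  qemu_iovec_add(&qiov, end_cow_buf, end_cow_len);     /* COW data after the guest write */
  /* then one write of the whole region instead of three separate writes */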
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of passing a single buffer pointer to do_perform_cow_write(),
pass a QEMUIOVector. This will allow us to merge the write requests
for the COW regions and the actual data into a single one.
Although do_perform_cow_read() does not strictly need to change its
API, we're doing it here as well for consistency.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reading both COW regions requires two separate requests, but it's
perfectly possible to merge them and perform only one. This generally
improves performance, particularly on rotating disk drives. The
downside is that the data in the middle region is read but discarded.
This patch takes a conservative approach and only merges reads when
the size of the middle region is <= 16KB.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This patch splits do_perform_cow() into three separate functions to
read, encrypt and write the COW regions.
perform_cow() can now read both regions first, then encrypt them and
finally write them to disk. The memory allocation is also done in
this function now, using one single buffer large enough to hold both
regions.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Instead of calling perform_cow() twice with a different COW region
each time, call it just once and make perform_cow() handle both
regions.
This patch simply moves code around. The next one will do the actual
reordering of the COW operations.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Qcow2COWRegion has two attributes:
- The offset of the COW region from the start of the first cluster
touched by the I/O request. Since it's always going to be positive
and the maximum request size is at most INT_MAX, we can use a
regular unsigned int to store this offset.
- The size of the COW region in bytes. This is guaranteed to be >= 0,
so we should use an unsigned type instead.
In x86_64 this reduces the size of Qcow2COWRegion from 16 to 8 bytes.
It will also help keep some assertions simpler now that we know that
there are no negative numbers.
The prototype of do_perform_cow() is also updated to reflect these
changes.
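In other words, something along these lines (field names are close to,
but not necessarily exactly, the real ones):
  typedef struct Qcow2COWRegion {
      /* before: uint64_t offset; size_t nb_bytes;  -> 16 bytes on x86_64 */
      unsigned offset;    /* offset from the start of the first touched cluster */
      unsigned nb_bytes;  /* size of the COW region in bytes */
  } Qcow2COWRegion;       /* now 8 bytes on x86_64 */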
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
We are using the return value of qcow2_encrypt_sectors() to detect
problems but we are throwing away the returned Error since we have no
way to report it to the user. Therefore we can simply get rid of the
local Error variable and pass NULL instead.
Alternatively we could try to figure out a way to pass the original
error instead of simply returning -EIO, but that would be more
invasive, so let's keep the current approach.
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Add the ability for the NVMe model to support both the RDS and WDS
modes in the Controller Memory Buffer.
Although not currently supported in the upstream Linux kernel, a fork
with support exists [1], and user-space test programs that build on
this also exist [2].
Useful for testing CMB functionality in preparation for real
CMB-enabled NVMe devices (coming soon).
[1] https://github.com/sbates130272/linux-p2pmem
[2] https://github.com/sbates130272/p2pmem-test
Signed-off-by: Stephen Bates <sbates@raithlin.com>
Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Perform the savevm/loadvm test with both iothread on and off. This
covers the recently found savevm/loadvm hang when iothread is enabled.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
The legacy -hda option does not support -drive/-device parameters. They
will be required by the next patch that extends this test case.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
migration_incoming_state_destroy() uses qemu_fclose() on the vmstate
file. Make sure to call it inside an AioContext acquire/release region.
This fixes a 'qemu: qemu_mutex_unlock: Operation not permitted' abort
in loadvm.
This patch closes the vmstate file before ending the drained region.
Previously we closed the vmstate file after ending the drained region.
The order does not matter.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
There used to be throttle_timers_{detach,attach}_aio_context() calls
in bdrv_set_aio_context(), but since 7ca7f0f6db
they are now in blk_set_aio_context().
Signed-off-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
This documents the driver-specific options for the raw, qcow2 and file
block drivers for the man page. For everything else, we refer to the
QAPI documentation.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
This adds documentation for the -blockdev options that apply to all
nodes independent of the block driver used.
All options that are shared by -blockdev and -drive are now explained in
the section for -blockdev. The documentation of -drive mentions that all
-blockdev options are accepted as well.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
blk/bdrv_drain_all() only takes effect for a single instant and then
resumes block jobs, guest devices, and other external clients like the
NBD server. This can be handy when performing a synchronous drain
before terminating the program, for example.
Monitor commands usually need to quiesce I/O across an entire code
region so blk/bdrv_drain_all() is not suitable. They must use
bdrv_drain_all_begin/end() to mark the region. This prevents new I/O
requests from slipping in or worse - block jobs completing and modifying
the graph.
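The pattern for such a code region is simply (sketch):
  bdrv_drain_all_begin();
  /* ... monitor command body: no new I/O can sneak in, and block jobs
   *     cannot complete and reshape the graph ... */
  bdrv_drain_all_end();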
I audited other blk/bdrv_drain_all() callers but did not find anything
that needs a similar fix. This patch fixes the savevm/loadvm commands.
Although I haven't encountered a real-world issue, this makes the code
safer.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
AioContext was designed to allow nested acquire/release calls. It uses
a recursive mutex so callers don't need to worry about nesting...or so
we thought.
BDRV_POLL_WHILE() is used to wait for block I/O requests. It releases
the AioContext temporarily around aio_poll(). This gives IOThreads a
chance to acquire the AioContext to process I/O completions.
It turns out that recursive locking and BDRV_POLL_WHILE() don't mix.
BDRV_POLL_WHILE() only releases the AioContext once, so the IOThread
will not be able to acquire the AioContext if it was acquired
multiple times.
Instead of trying to release AioContext n times in BDRV_POLL_WHILE(),
this patch simply avoids nested locking in save_vmstate(). It's the
simplest fix and we should step back to consider the big picture with
all the recent changes to block layer threading.
This patch is the final fix to solve 'savevm' hanging with -object
iothread.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Calling aio_poll() directly may have been fine previously, but this is
the future, man! The difference between an aio_poll() loop and
BDRV_POLL_WHILE() is that BDRV_POLL_WHILE() releases the AioContext
around aio_poll().
This allows the IOThread to run fd handlers or BHs to complete the
request. Failure to release the AioContext causes deadlocks.
Using BDRV_POLL_WHILE() partially fixes a 'savevm' hang with -object
iothread.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Call bdrv_inc/dec_in_flight() for vmstate reads/writes. This seems
unnecessary at first glance because vmstate reads/writes are done
synchronously while the guest is stopped. But we need the bdrv_wakeup()
in bdrv_dec_in_flight() so the main loop sees request completion.
Besides, it's cleaner to count vmstate reads/writes like ordinary
read/write requests.
The bdrv_wakeup() partially fixes a 'savevm' hang with -object iothread.
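The resulting pattern around a vmstate read/write is roughly (sketch):
  bdrv_inc_in_flight(bs);
  ret = /* ... perform the synchronous vmstate read or write ... */;
  bdrv_dec_in_flight(bs);   /* triggers bdrv_wakeup() so the main loop notices completion */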
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
When qemu is exited, all running jobs should be cancelled successfully.
This adds a test for this for all types of block jobs that currently
exist in qemu.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
After _cleanup_qemu(), test cases should be able to start the next qemu
process and call _cleanup_qemu() for that one as well. For this to work
cleanly, we need to improve the cleanup so that the second invocation
doesn't try to kill the qemu instances from the first invocation a
second time (which would result in error messages).
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
commit_complete() can't assume that after its block_job_completed() the
job is actually immediately freed; someone else may still be holding
references. In this case, the op blockers on the intermediate nodes make
the graph reconfiguration in the completion code fail.
Call block_job_remove_all_bdrv() manually so that we know for sure that
any blockers on intermediate nodes are given up.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
We want the wide character functions from the ncurses header.
Unfortunately it doesn't provide them by default, but only
if either:
* NCURSES_WIDECHAR is defined (for ncurses 20111030 and up)
* _XOPEN_SOURCE/_XOPEN_SOURCE_EXTENDED are suitably defined
So far we have been implicitly relying on the latter, because
for GNU libc when we define _GNU_SOURCE this causes libc
to define the _XOPEN_SOURCE macros for us. Unfortunately
this doesn't work on all libcs, because some (like OSX and
musl libc) do not define _XOPEN_SOURCE when _GNU_SOURCE
is defined.
We can't fix this by defining _XOPEN_SOURCE ourselves, because
that also means "and don't provide any functions that aren't in
that standard", and not all libcs provide any way to override
that to also get the non-standard functions. In particular
FreeBSD has no such mechanism, and OSX's _DARWIN_C_SOURCE
doesn't reenable everything (for instance getpagesize()
is still not prototyped if _DARWIN_C_SOURCE and _XOPEN_SOURCE
are both defined).
So we have to define NCURSES_WIDECHAR. (This will only work
if your ncurses is at least 20111030, as older versions
don't honour this macro.)
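In effect, the build now does the equivalent of the following before
including the ncurses header (whether via a compiler -D flag or a
#define is a build-system detail):
  #define NCURSES_WIDECHAR 1
  #include <curses.h>
  /* wide-character calls such as setcchar()/wadd_wch() are now declared */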
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Acked-by: Laszlo Ersek <lersek@redhat.com>
Message-id: 1496414138-7622-1-git-send-email-peter.maydell@linaro.org
Let's keep it very simple for now and flush the complete TLB: we
currently can't find the right entries in our TLB, as we would have
to store the used tables for each element.
As we now fully implement the DAT-enhancement facility, we can allow it
to be enabled for the qemu CPU model.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170622094151.28633-4-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Most of the PSW bits that were being copied into TB->flags
are not relevant to translation. Removing those that are
unnecessary reduces the amount of translation required.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We missed the proper alignment in TRTO/TRTT, and were ignoring the M3
field for all TRXX insns without ETF2-ENH.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This facility bit includes execution-hint, load-and-trap,
miscellaneous-instruction-extensions and processor-assist.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This facility bit includes load-on-condition-2 and
load-and-zero-rightmost-byte.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This facility bit includes DFP-rounding, FPR-GR-transfer,
FPS-sign-handling, and IEEE-exception-simulation. We do
support all of these.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
This adds support for the MOVE WITH OPTIONAL SPECIFICATIONS (MVCOS)
instruction. Allow it to be enabled for the qemu cpu model using
qemu-system-s390x ... -cpu qemu,mvcos=on ...
This allows booting a Linux kernel that uses it for uaccess.
We are missing (as for most other parts) low-address protection checks,
PSW key / storage key checks and support for AR-mode.
We fake an ADDRESSING exception when called from problem state (which
seems to rely on PSW key checks being in place) and if AR-mode is used.
User mode will always see a PRIVILEGED exception.
This patch is based on an original patch by Miroslav Benes (thanks!).
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170614133819.18480-3-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The FAC_ names were placeholders prior to the introduction
of the current facility modeling.
Reviewed-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Richard Henderson <rth@twiddle.net>
pull-seccomp-20170622
# gpg: Signature made Thu 22 Jun 2017 09:01:01 BST
# gpg: using RSA key 0xDF32E7C0F0FFF9A2
# gpg: Good signature from "Eduardo Otubo (Senior Software Engineer) <otubo@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg: There is no indication that the signature belongs to the owner.
# Primary key fingerprint: D67E 1B50 9374 86B4 0723 DBAB DF32 E7C0 F0FF F9A2
* remotes/otubo/tags/pull-seccomp-20170622:
MAINTAINERS: seccomp: change email contact for Eduardo Otubo
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Programs running inside of QEMU can sometimes use more CPU time than is really
needed. To solve this problem, we just need to throttle the virtual CPU. This
feature will stop laptops from burning up.
This patch adds a menu called Speed that has menu items from 100% to 1% that
represent the speed options. 100% is full speed and 1% is slowest.
Signed-off-by: John Arbuckle <programmingkidx@gmail.com>
Message-id: D6FAAABF-064D-49C0-B572-C73679F34052@gmail.com
[PMM: Moved "mark 100% menu item as checked initially" code to
after menu item is allocated, not before it]
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
As of release 10.12.4, OS X (Sierra) refuses to boot unless the
AppleSMC supports an additional I/O port, expected to provide an
error status code.
Update the [cmd|data]_write() and data_read() methods to implement
the required state machine, and add I/O region & methods to handle
access to the error port.
Originally proposed by Eric Shelton <eshelton@pobox.com> based in
part on FakeSMC (git://git.assembla.com/fakesmc.git).
Signed-off-by: Gabriel Somlo <gsomlo@gmail.com>
Reviewed-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Phil Dennis-Jordan <phil@philjordan.eu>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 1497639316-22202-3-git-send-email-gsomlo@gmail.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Delays in the input layer are special-cased input events. Every input
event is accounted for in a global input queue count. The special-cased
delays however did not get removed from the queue, leading to queue
overruns and thus silent key drops after typing quite a few characters.
Signed-off-by: Alexander Graf <agraf@suse.de>
Message-id: 1498117318-162102-1-git-send-email-agraf@suse.de
Fixes: be1a7176 ("input: add support for kbd delays")
Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Drop commented-out debug logging and add trace points instead.
Also clean up the parser code a bit: the key name is copied into a new
variable instead of patching the input line, so that we can log the
unmodified line.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170606134736.26080-1-kraxel@redhat.com
This is mostly Philippe's updates
We add the following cross-compile targets:
- mipsel-softmmu,mipsel-linux-user,mips64el-linux-user
- armeb-linux-user
While I was rolling I discovered we could also back out a bunch of the
emdebian hacks as the newly released stretch handles cross compilers
as first class citizens. Unfortunately this also meant I had to drop
the powerpc support as that is no longer in Debian stable.
# gpg: Signature made Wed 21 Jun 2017 15:09:50 BST
# gpg: using RSA key 0xFBD0DB095A9E2A44
# gpg: Good signature from "Alex Bennée (Master Work Key) <alex.bennee@linaro.org>"
# Primary key fingerprint: 6685 AE99 E751 67BC AFC8 DF35 FBD0 DB09 5A9E 2A44
* remotes/stsquad/tags/pull-ci-updates-210617-2: (21 commits)
MAINTAINERS: self-appoint me as reviewer in build/test automation
MAINTAINERS: add Shippable automation platform URL
shippable: add mipsel target
shippable: add armeb-linux-user target
shippable: be verbose while building docker images
shippable: do not initialize submodules automatically
shippable: build using all available cpus
shippable: use C locale to simplify console output
docker: add mipsel build target
docker: add extra libs to s390x target to extend codebase coverage
docker: add extra libs to arm64 target to extend codebase coverage
docker: add extra libs to armhf target to extend codebase coverage
docker: use eatmydata in debian arm64 image
docker: use eatmydata in debian armhf image
docker: use eatmydata, install common build packages in base image
docker: use better regex to generate deb-src entries
docker: install ca-certificates package in base image
docker: rebuild image if 'extra files' checksum does not match
docker: add --include-files argument to 'build' command
docker: let _copy_with_mkdir() sub_path argument be optional
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
QAPI patches for 2017-06-09
# gpg: Signature made Tue 20 Jun 2017 13:31:39 BST
# gpg: using RSA key 0x3870B400EB918653
# gpg: Good signature from "Markus Armbruster <armbru@redhat.com>"
# gpg: aka "Markus Armbruster <armbru@pond.sub.org>"
# Primary key fingerprint: 354B C8B3 D7EB 2A6B 6867 4E5F 3870 B400 EB91 8653
* remotes/armbru/tags/pull-qapi-2017-06-09-v2: (41 commits)
tests/qdict: check more get_try_int() cases
console: use get_uint() for "head" property
i386/cpu: use get_uint() for "min-level"/"min-xlevel" properties
numa: use get_uint() for "size" property
pnv-core: use get_uint() for "core-pir" property
pvpanic: use get_uint() for "ioport" property
auxbus: use get_uint() for "addr" property
arm: use get_uint() for "mp-affinity" property
xen: use get_uint() for "max-ram-below-4g" property
pc: use get_uint() for "hpet-intcap" property
pc: use get_uint() for "apic-id" property
pc: use get_uint() for "iobase" property
acpi: use get_uint() for "pci-hole*" properties
acpi: use get_uint() for various acpi properties
acpi: use get_uint() for "acpi-pcihp-io*" properties
platform-bus: use get_uint() for "addr" property
bcm2835_fb: use {get, set}_uint() for "vcram-size" and "vcram-base"
aspeed: use {set, get}_uint() for "ram-size" property
pcihp: use get_uint() for "bsel" property
pc-dimm: make "size" property uint64
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Instead, do it in the 'ci' target when needed.
For the mips64el-softmmu target, use the dtc submodule if distribution
packages are too old.
Example with an outdated libfdt on the mips64el-softmmu target
(required is >= 1.4.2):
# dpkg-query --showformat='${Version}\n' --show libfdt-dev
1.4.0+dfsg-1
shippable output:
----------------
LINK mips64el-softmmu/qemu-system-mips64el
../hw/core/loader-fit.o: In function `load_fit':
/root/src/github.com/philmd/qemu/hw/core/loader-fit.c:278: undefined reference to `fdt_first_subnode'
/root/src/github.com/philmd/qemu/hw/core/loader-fit.c:286: undefined reference to `fdt_next_subnode'
/root/src/github.com/philmd/qemu/hw/core/loader-fit.c:277: undefined reference to `fdt_first_subnode'
collect2: error: ld returned 1 exit status
Makefile:201: recipe for target 'qemu-system-mips64el' failed
make[1]: *** [qemu-system-mips64el] Error 1
Makefile:327: recipe for target 'subdir-mips64el-softmmu' failed
make: *** [subdir-mips64el-softmmu] Error 2
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
As of this commit:
$ echo "container proc:" `getconf _NPROCESSORS_ONLN` `getconf _NPROCESSORS_CONF`
container proc: 2 2
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
remove this noise:
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = "en_US.UTF-8",
LC_CTYPE = "en_US.UTF-8",
LANG = "en_US.UTF-8"
are supported and installed on your system.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
The common build packages are: build-essential clang git bison flex
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
[AJB: fixups following stretch update]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Debian has now released Stretch as its new stable. As we track
debian:stable-slim this has a few consequences. For one thing we can
now drop the emdebian hacks as cross compilers are part of the
official repositories now. However we do lose the ability to build
against powerpc (not ppc64) since that is no longer a release
architecture.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Some USB devices have sparse interface numbering which cannot be passed
through.
For example, the Sierra Wireless MC7455/MC7430:
# lsusb -D /dev/bus/usb/003/003 | egrep '1199|9071|bNumInterfaces|bInterfaceNumber'
Device: ID 1199:9071 Sierra Wireless, Inc.
idVendor 0x1199 Sierra Wireless, Inc.
idProduct 0x9071
bNumInterfaces 5
bInterfaceNumber 0
bInterfaceNumber 2
bInterfaceNumber 3
bInterfaceNumber 8
bInterfaceNumber 10
In this case, the interface numbers are 0, 2, 3, 8, 10 and not the
0, 1, 2, 3, 4 that QEMU tries to claim.
This change allows sparse USB interface numbering.
Instead of only claiming the interfaces in the range reported by the USB
device through bNumInterfaces, QEMU attempts to claim all possible
interfaces.
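The claiming loop conceptually changes as sketched below (constant and
field names are illustrative, not the exact host-libusb code):
  /* old: for (i = 0; i < dev->ninterfaces; i++) ... claim and fail on error */
  for (i = 0; i < USB_MAX_INTERFACES; i++) {
      rc = libusb_claim_interface(s->dh, i);
      /* interfaces that don't exist simply fail to claim and are skipped */
  }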
v2 to fix broken v1 patch formatting.
v3 to fix indentation.
Signed-off-by: Samuel Brian <sam.brian@accelerated.com>
Message-id: 20170613234039.27201-1-sam.brian@accelerated.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Add a collection of egl_fb_*() helper functions to manage and use opengl
framebuffers, which is a common pattern in UI code with opengl support.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170614084149.31314-2-kraxel@redhat.com
fix regression from commit 244f144134:
$ make subdir-arm-softmmu
make[1]: *** No rule to make target 'tci.o', needed by 'qemu-system-arm'. Stop.
Makefile:328: recipe for target 'subdir-arm-softmmu' failed
make: *** [subdir-arm-softmmu] Error 2
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Message-id: 20170620163009.21764-1-f4bug@amsat.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# gpg: Signature made Fri 16 Jun 2017 01:18:46 BST
# gpg: using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with sufficiently trusted signatures!
# gpg: It is not certain that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6
* remotes/famz/tags/docker-and-block-pull-request: (23 commits)
block: make accounting thread-safe
block: split BlockAcctStats creation and setup
block: introduce block_account_one_io
block: protect modification of dirty bitmaps with a mutex
migration/block: reset dirty bitmap before reading
block: introduce dirty_bitmap_mutex
block: protect tracked_requests and flush_queue with reqs_lock
block: access write_gen with atomics
block: use Stat64 for wr_highest_offset
util: add stats64 module
throttle-groups: protect throttled requests with a CoMutex
throttle-groups: do not use qemu_co_enter_next
throttle-groups: only start one coroutine from drained_begin
block: access io_plugged with atomic ops
block: access wakeup with atomic ops
block: access serialising_in_flight with atomic ops
block: access io_limits_disabled with atomic ops
block: access quiesce_counter with atomic ops
block: access copy_on_read with atomic ops
docker: Add flex and bison to centos6 image
...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
PIIX4: piix4_pm_add_propeties() defines these with
object_property_add_uint*_ptr().
Q35: ich9_lpc_add_properties() and ich9_pm_add_properties() define them
similarly, except for ACPI_PM_PROP_GPE0_BLK(). That one's getter
ich9_pm_get_gpe0_blk() uses visit_type_uint32().
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170607163635.17635-29-marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Based on the underlying type of the data accessed, use the appropriate
getters/setters:
* AcpiPmInfo members s3_disabled, s4_disabled are bool, member s4_val is
a uint8_t
* Property ACPI_PCIHP_IO_PROP is defined with
object_property_add_uint32_ptr()
* Property PCIE_HOST_MCFG_SIZE is implemented with visit_type_uint64()
* PCIDevice property "addr" is backed by PCIDevice member devfn, which
is an int32_t
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170607163635.17635-20-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[More verbose commit message]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
The getter and setter of TYPE_APIC_COMMON property "id" are
apic_common_get_id() and apic_common_set_id().
apic_common_get_id() reads either APICCommonState member uint32_t
initial_apic_id or uint8_t id into an int64_t local variable. It then
passes this variable to visit_type_int().
apic_common_set_id() uses visit_type_int() to read the value into a
local variable, which it then assigns both to initial_apic_id and id.
While the state backing the property is two unsigned members, 8 and 32
bits wide, the actual visitor is 64 bits signed.
Change getter and setter to use visit_type_uint32(). Then everything's
uint32_t, except for @id.
Suggested-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170607163635.17635-19-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Use the actual unsigned integer type name.
The type name change impacts the following externally visible area:
* vl.c's machine_help_func() puts it in help for -machine NAME,help.
* QMP command qom-list exposes it in ObjectPropertyInfo member @type.
* QMP command device-list-properties exposes it in DevicePropertyInfo
member @type.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20170607163635.17635-15-marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Switch to use QNum/uint where appropriate to remove i64 limitation.
The input visitor will cast i64 input to u64 for compatibility
reasons (existing JSON QMP clients already use negative i64 for large
u64, and expect an implicit cast in qemu).
Note: before the patch, uint64_t values above INT64_MAX are sent over
json QMP as negative values, e.g. UINT64_MAX is sent as -1. After the
patch, they are sent unmodified. Clearly a bug fix, but we have to
consider compatibility issues anyway. libvirt should cope fine,
because its parsing of unsigned integers accepts negative values
modulo 2^64. There's hope that other clients will, too.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20170607163635.17635-12-marcandre.lureau@redhat.com>
[check_native_list() tweaked for consistency with signed case]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Before the previous commit, parameter promote_int = true made
visit_start_alternate() with an input visitor avoid QTYPE_QINT
variants and create QTYPE_QFLOAT variants instead. This was used
where QTYPE_QINT variants were invalid.
The previous commit fused QTYPE_QINT with QTYPE_QFLOAT, rendering
promote_int useless and unused.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20170607163635.17635-8-marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
We would like to use a same QObject type to represent numbers, whether
they are int, uint, or floats. Getters will allow some compatibility
between the various types if the number fits other representations.
Add a few more tests while at it.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170607163635.17635-7-marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[parse_stats_intervals() simplified a bit, comment in
test_visitor_in_int_overflow() tidied up, suppress bogus warnings]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Exit to cpu loop so we reevaluate cpu_s390x_hw_interrupts.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can call tb_htable_lookup even when the tb_jmp_cache is completely
empty. Therefore, un-nest most of the code dependent on tb != NULL
from the read from the cache.
This improves the hit rate of lookup_tb_ptr; for instance, when booting
and immediately shutting down debian-arm, the hit rate improves from
93.2% to 99.4%.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
The new placement of the TB means that we can use one insn
to load the goto_tb destination directly from the TB.
Signed-off-by: Richard Henderson <rth@twiddle.net>
The new placement of the TB means that we can use one insn
to load the return value for exit_tb returning the TB pointer.
Tested-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We are partially initializing tb in tb_alloc. Instead, fully
initialize it in tb_gen_code, which is tb_alloc's only caller.
This saves an unnecessary write to tb->cflags.
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1497038122-26364-1-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Allocating an arbitrarily-sized array of tbs results in either
(a) a lot of memory wasted or (b) unnecessary flushes of the code
cache when we run out of TB structs in the array.
An obvious solution would be to just malloc a TB struct when needed,
and keep the TB array as an array of pointers (recall that tb_find_pc()
needs the TB array to run in O(log n)).
Perhaps a better solution, which is implemented in this patch, is to
allocate TB's right before the translated code they describe. This
results in some memory waste due to padding to have code and TBs in
separate cache lines--for instance, I measured 4.7% of padding in the
used portion of code_gen_buffer when booting aarch64 Linux on a
host with 64-byte cache lines. However, it can allow for optimizations
in some host architectures, since TCG backends could safely assume that
the TB and the corresponding translated code are very close to each
other in memory. See this message by rth for a detailed explanation:
https://lists.gnu.org/archive/html/qemu-devel/2017-03/msg05172.html
Subject: Re: GSoC 2017 Proposal: TCG performance enhancements
Message-ID: <1e67644b-4b30-887e-d329-1848e94c9484@twiddle.net>
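The allocation then looks roughly like this (a simplified sketch of the
approach; see tcg_tb_alloc for the real arithmetic):
  /* carve the TB header out of code_gen_buffer, aligned to a cache line,
   * and start the translated code immediately after it */
  tb   = (void *)ROUND_UP((uintptr_t)s->code_gen_ptr, align);
  code = (void *)ROUND_UP((uintptr_t)(tb + 1), align);
  s->code_gen_ptr = code;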
Suggested-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Pranith Kumar <bobby.prani@gmail.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1496790745-314-3-git-send-email-cota@braap.org>
[rth: Simplify the arithmetic in tcg_tb_alloc]
Signed-off-by: Richard Henderson <rth@twiddle.net>
Add helpers to gather cache info from the host at init-time.
For now, only export the host's I/D cache line sizes, which we
will use to improve cache locality to avoid false sharing.
Suggested-by: Richard Henderson <rth@twiddle.net>
Suggested-by: Geert Martin Ijewski <gm.ijewski@web.de>
Tested-by: Geert Martin Ijewski <gm.ijewski@web.de>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1496794624-4083-1-git-send-email-cota@braap.org>
[rth: Move all implementations from tcg/ppc/]
Signed-off-by: Richard Henderson <rth@twiddle.net>
Previously, the dst side would immediately try to lock the write byte
upon receiving QEMU_VM_EOF, but on the src side, bdrv_inactivate_all()
is only done after sending it. If the src host is under load, the dst
may fail to acquire the lock due to racing with the src unlocking it.
Fix this by hoisting the bdrv_inactivate_all() operation before
QEMU_VM_EOF.
N.B. A further improvement could possibly be done to cleanly handover
locks between src and dst, so that there is no window where a third QEMU
could steal the locks and prevent src and dst from running.
N.B. This commit includes a minor improvement to the error handling
by using qemu_file_set_error().
Reported-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-id: 20170616160658.32290-1-famz@redhat.com
Reviewed-by: Daniel P. Berrange <berrange@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
[PMM: noted qemu_file_set_error() use in commit as suggested by Daniel]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Alternates with both a 'number' and an 'int' branch will become
invalid when the next patch merges of QFloat and QInt into QNum.
More sophisticated alternate code could keep them valid, but since
we have no users outside tests, simply drop the tests.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20170607163635.17635-4-marcandre.lureau@redhat.com>
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Commit e987c37aee ("hw/i386: check if
nvdimm is enabled before plugging") introduced a check to reject nvdimm
hotplug if -machine pc,nvdimm=on was not given.
This check executes after pc_dimm_memory_plug() has already completed
and does not reverse the effect of this function in the case of failure.
Perform the check before calling pc_dimm_memory_plug(). This fixes the
following abort:
$ qemu -M accel=kvm -m 1G,slots=4,maxmem=8G \
-object memory-backend-file,id=mem1,share=on,mem-path=nvdimm.dat,size=1G
(qemu) device_add nvdimm,memdev=mem1
nvdimm is not enabled: missing 'nvdimm' in '-M'
(qemu) device_add nvdimm,memdev=mem1
Core dumped
The backtrace is:
#0 0x00007fffdb5b191f in raise () at /lib64/libc.so.6
#1 0x00007fffdb5b351a in abort () at /lib64/libc.so.6
#2 0x00007fffdb5a9da7 in __assert_fail_base () at /lib64/libc.so.6
#3 0x00007fffdb5a9e52 in () at /lib64/libc.so.6
#4 0x000055555577a5fa in qemu_ram_set_idstr (new_block=0x555556747a00, name=<optimized out>, dev=dev@entry=0x555556705590) at qemu/exec.c:1709
#5 0x0000555555a0fe86 in vmstate_register_ram (mr=mr@entry=0x55555673a0e0, dev=dev@entry=0x555556705590) at migration/savevm.c:2293
#6 0x0000555555965088 in pc_dimm_memory_plug (dev=dev@entry=0x555556705590, hpms=hpms@entry=0x5555566bb0e0, mr=mr@entry=0x555556705630, align=<optimized out>, errp=errp@entry=0x7fffffffc660)
at hw/mem/pc-dimm.c:110
#7 0x000055555581d89b in pc_dimm_plug (errp=0x7fffffffc6c0, dev=0x555556705590, hotplug_dev=<optimized out>) at qemu/hw/i386/pc.c:1713
#8 0x000055555581d89b in pc_machine_device_plug_cb (hotplug_dev=<optimized out>, dev=0x555556705590, errp=0x7fffffffc6c0) at qemu/hw/i386/pc.c:2004
#9 0x0000555555914da6 in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7fffffffc7e8) at hw/core/qdev.c:926
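A sketch of the reordered plug handler described above (field names
approximate the 2.10-era hw/i386/pc.c, and the helper is cut down to the two
relevant calls):
static void pc_dimm_plug_sketch(PCMachineState *pcms, DeviceState *dev,
                                MemoryRegion *mr, uint64_t align, Error **errp)
{
    /* Reject nvdimm before any state is touched, so a plain return is a
     * complete error path and there is nothing to roll back. */
    if (object_dynamic_cast(OBJECT(dev), TYPE_NVDIMM) &&
        !pcms->acpi_nvdimm_state.is_enabled) {
        error_setg(errp, "nvdimm is not enabled: missing 'nvdimm' in '-M'");
        return;
    }

    /* Only now perform the plug that registers RAM for migration etc. */
    pc_dimm_memory_plug(dev, &pcms->hotplug_memory, mr, align, errp);
}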
Cc: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>
Reviewed-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Move the memcpy up to where it is needed, then share the trace so that we
trace every correct remapping.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
First, let vtd_do_iommu_translate() return a status, so that we know
explicitly whether an error occurred. Meanwhile, make sure the
IOMMUTLBEntry is filled in in that case.
Then, clean up vtd_iommu_translate() a bit, so that we get a log even
with PT. Also, remove useless assignments.
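A sketch of the new calling convention (the signature is cut down; the real
helper takes the VTDAddressSpace and more context):
static bool vtd_do_iommu_translate_sketch(hwaddr addr, bool is_write,
                                          IOMMUTLBEntry *entry)
{
    bool fault = false;

    /* ... page-table walk fills entry->translated_addr and sets 'fault' ... */

    if (fault) {
        entry->perm = IOMMU_NONE;   /* entry is still fully filled on error */
        return false;               /* caller can now log the failure */
    }
    entry->perm = is_write ? IOMMU_RW : IOMMU_RO;
    return true;
}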
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
We have converted many of the DPRINTF() calls into traces. This patch
converts the last 100+ of them.
To debug VT-d when an error happens, enable:
-trace enable="vtd_err*"
This works just like the old GENERAL did, only better, since we don't
need to recompile.
Similar rules apply to the other modules. I tried to make the prefixes
good enough for sub-module debugging.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
These checks verify that the guest RAM turns from read-write to
"blackhole" when crossing the low boundary of the TSEG. Both the standard
1MB/2MB/8MB TSEG sizes and an extended (16MB) TSEG size are tested.
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
A test program can start up QEMU several times, with different command
lines. For such cases, qtest_start() and qtest_end() are called from
within the individual test functions. Examples: "virtio-console-test.c",
"numa-test.c", and many others.
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
The q35 machine type currently lets the guest firmware select a 1MB, 2MB
or 8MB TSEG (basically, SMRAM) size. In edk2/OVMF, we use 8MB, but even
that is not enough when a lot of VCPUs (more than approx. 224) are
configured -- SMRAM footprint scales largely proportionally with VCPU
count.
Introduce a new property for "mch" called "extended-tseg-mbytes", which
expresses (in megabytes) the user's choice of TSEG (SMRAM) size.
Invent a new, QEMU-specific register in the config space of the DRAM
Controller, at offset 0x50, in order to allow guest firmware to query the
TSEG (SMRAM) size.
According to Intel Document Number 316966-002, Table 5-1 "DRAM Controller
Register Address Map (D0:F0)":
Warning: Address locations that are not listed are considered Intel
Reserved registers locations. Reads to Reserved registers may
return non-zero values. Writes to reserved locations may
cause system failures.
All registers that are defined in the PCI 2.3 specification,
but are not necessary or implemented in this component are
simply not included in this document. The
reserved/unimplemented space in the PCI configuration header
space is not documented as such in this summary.
Offsets 0x50 and 0x51 are not listed in Table 5-1. They are also not part
of the standard PCI config space header. And they precede the capability
list as well, which starts at 0xe0 for this device.
When the guest writes value 0xffff to this register, the value that can be
read back is that of "mch.extended-tseg-mbytes" -- unless it remains
0xffff. The guest is required to write 0xffff first (as opposed to a
read-only register) because PCI config space is generally not cleared on
QEMU reset, and after S3 resume or reboot, new guest firmware running on
old QEMU could read a guest OS-injected value from this register.
After reading the available "extended" TSEG size, the guest firmware may
actually request that TSEG size by writing pattern 11b to the ESMRAMC
register's TSEG_SZ bit-field. (The Intel spec referenced above defines
only patterns 00b (1MB), 01b (2MB) and 10b (8MB); 11b is reserved.)
On the QEMU command line, the value can be set with
-global mch.extended-tseg-mbytes=N
The default value for 2.10+ q35 machine types is 16. The value is limited
to 0xfff (4095) at the moment, purely so that the product (4095 MB) can be
stored to the uint32_t variable "tseg_size" in mch_update_smram(). Users
are responsible for choosing sensible TSEG sizes.
On 2.9 and earlier q35 machine types, the default value is 0. This lets
the 11b bit pattern in ESMRAMC.TSEG_SZ, and the register at offset 0x50,
keep their original behavior.
When "extended-tseg-mbytes" is nonzero, the new register at offset 0x50 is
set to that value on reset, for completeness.
PCI config space is migrated automatically, so no VMSD changes are
necessary.
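Seen from the guest firmware, the query boils down to the following sketch
(pci_config_write16/pci_config_read16 are hypothetical stand-ins for whatever
config-space accessors the firmware provides; only offset 0x50 and the 0xffff
handshake come from the description above):
/* Hypothetical accessors into the DRAM Controller's config space. */
extern void pci_config_write16(unsigned offset, unsigned short value);
extern unsigned short pci_config_read16(unsigned offset);

#define MCH_EXT_TSEG_MBYTES  0x50     /* QEMU-specific DRAM controller register */
#define EXT_TSEG_QUERY       0xffff   /* must be written before reading */

/* Returns the extended TSEG size in MB, or 0 if QEMU does not offer one. */
static unsigned query_extended_tseg_mbytes(void)
{
    unsigned mbytes;

    pci_config_write16(MCH_EXT_TSEG_MBYTES, EXT_TSEG_QUERY);
    mbytes = pci_config_read16(MCH_EXT_TSEG_MBYTES);

    /* If the register still reads 0xffff, "extended-tseg-mbytes" is 0 (old
     * machine types) and the firmware falls back to the 1/2/8 MB choices. */
    return mbytes == EXT_TSEG_QUERY ? 0 : mbytes;
}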
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Ref: https://bugzilla.redhat.com/show_bug.cgi?id=1447027
Ref: https://lists.01.org/pipermail/edk2-devel/2017-May/010456.html
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
block_acct_destroy is called unconditionally in blk_delete, but there is
no BlockAcctStats function that is called unconditionally in blk_new.
Split block_acct_init in two, so that it will be possible to create a
QemuMutex in block_acct_init and destroy it in block_acct_cleanup.
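A sketch of the resulting split (the prototypes and the mutex field are
approximations of the new accounting API, not copied from the patch):
/* Called unconditionally from blk_new(): set up what always exists. */
void block_acct_init(BlockAcctStats *stats)
{
    qemu_mutex_init(&stats->lock);
}

/* Called only when the user actually configures accounting options. */
void block_acct_setup(BlockAcctStats *stats, bool account_invalid,
                      bool account_failed)
{
    stats->account_invalid = account_invalid;
    stats->account_failed = account_failed;
}

/* Called unconditionally from blk_delete(): mirrors block_acct_init(). */
void block_acct_cleanup(BlockAcctStats *stats)
{
    qemu_mutex_destroy(&stats->lock);
}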
Cc: Alberto Garcia <berto@igalia.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20170605123908.18777-19-pbonzini@redhat.com>
Reviewed-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Another possibility is to use tg->lock, which we're holding anyway in
both schedule_next_request and throttle_group_co_io_limits_intercept.
This would require open-coding the CoQueue however, so I've chosen this
alternative.
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20170605123908.18777-10-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Starting all waiting coroutines from bdrv_drain_all is unnecessary;
throttle_group_co_io_limits_intercept calls schedule_next_request as
soon as the coroutine restarts, which in turn will restart the next
request if possible.
If we only start the first request and let the coroutines dance from
there the code is simpler and there is more reuse between
throttle_group_config, throttle_group_restart_blk and timer_cb. The
next patch will benefit from this.
We also stop accessing the blkp->throttled_reqs CoQueues from
throttle_group_restart_blk when there is no attached throttling group.
That worked, but it was not pretty.
The only thing that can interrupt the dance is the QEMU_CLOCK_VIRTUAL
timer when switching from one block device to the next, because the
timer is set to "now + 1" but QEMU_CLOCK_VIRTUAL might not be running.
Set that timer to point in the present ("now") rather than the future
and things work.
Reviewed-by: Alberto Garcia <berto@igalia.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20170605123908.18777-8-pbonzini@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Currently there are warnings about flex and bison being missing when
building in the centos6 image:
make[1]: flex: Command not found
BISON dtc-parser.tab.c
make[1]: bison: Command not found
Add them.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
Message-Id: <20170524005206.31916-1-famz@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Fam Zheng <famz@redhat.com>
This commit introduces a vhost-user-scsi backend sample application. It
must be linked with libiscsi and libvhost-user.
To use it, compile with:
$ make vhost-user-scsi
And run as follows:
$ ./vhost-user-scsi -u vus.sock -i iscsi://uri_to_target/
$ qemu-system-x86_64 --enable-kvm -m 512 \
-object memory-backend-file,id=mem,size=512m,share=on,mem-path=guestmem \
-numa node,memdev=mem \
-chardev socket,id=vhost-user-scsi,path=vus.sock \
-device vhost-user-scsi-pci,chardev=vhost-user-scsi \
The application is currently limited to a single LUN and processes
requests synchronously (therefore only achieving QD1). The purpose of
the code is to show how a backend can be implemented and to test the
vhost-user-scsi QEMU implementation.
If a different instance of this vhost-user-scsi application is running
on a remote host, a VM can be live-migrated to that host.
Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
Message-Id: <1488479153-21203-5-git-send-email-felipe@nutanix.com>
This commit introduces a vhost-user device for SCSI. This is based
on the existing vhost-scsi implementation, but done over vhost-user
instead. It also uses a chardev to connect to the backend. Unlike
vhost-scsi (today), VMs using vhost-user-scsi can be live migrated.
To use it, start QEMU with a command line equivalent to:
qemu-system-x86_64 \
-chardev socket,id=vus0,path=/tmp/vus.sock \
-device vhost-user-scsi-pci,chardev=vus0,bus=pci.0,addr=...
A separate commit presents a sample application linked with libiscsi to
provide a backend for vhost-user-scsi.
Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
Message-Id: <1488479153-21203-4-git-send-email-felipe@nutanix.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This is for the future interoperability & management guide. It includes
the QAPI docs, including the automatically generated ones, other socket
protocols (vhost-user, VNC), and the qcow2 file format.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Functions nbd_negotiate_{read,write,drop_sync} were introduced in
1a6245a5b, when nbd_rwv (was nbd_wr_sync) was working through
qemu_co_sendv_recvv (the path is nbd_wr_sync -> qemu_co_{recv/send} ->
qemu_co_send_recv -> qemu_co_sendv_recvv), which just yields, without
setting any handlers. But starting from ff82911cd nbd_rwv (was
nbd_wr_syncv) works through qio_channel_yield() which sets handlers, so
the watchers in nbd_negotiate_{read,write,drop_sync} are redundant;
let's just use the nbd_{read,write,drop} functions instead.
The nbd_{read,write,drop} functions have an errp parameter, which is
unused in this patch. This will be fixed later.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170602150150.258222-4-vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Rename
nbd_wr_syncv -> nbd_rwv
read_sync -> nbd_read
read_sync_eof -> nbd_read_eof
write_sync -> nbd_write
drop_sync -> nbd_drop
1. nbd_ prefix
read_sync and write_sync are already shared, so it is good to have a
namespace prefix. drop_sync will be shared, and read_sync_eof is
related to read_sync, so let's rename them all.
2. _sync suffix
_sync is related to the fact that nbd_wr_syncv doesn't return if a
write to socket returns EAGAIN. The first implementation of
nbd_wr_syncv (was wr_sync in 7a5ca8648b) just loops while getting
EAGAIN, the current implementation yields in this case.
Why we want to get rid of it:
- it is normal for r/w functions to be synchronous, so an additional
suffix for it looks redundant (conversely, we do have an _aio suffix
for async functions)
- in the block layer the _sync suffix is used when a function does a
flush, so using it for something else is a bit confusing
- it keeps function names short after adding the nbd_ prefix
3. For nbd_wr_syncv, use the more common notation 'rw'.
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20170602150150.258222-2-vsementsov@virtuozzo.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
There are several types of accelerators in QEMU, and all of them have
their own file except TCG, which is still defined in accel.c. Split the
TCG accelerator out of accel.c and rename it to tcg-all.c, and create an
accel/ directory to hold the KVM- and TCG-related files.
Signed-off-by: Yang Zhong <yang.zhong@intel.com>
Message-Id: <1496383606-18060-2-git-send-email-yang.zhong@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Ignore SIGPIPE: qemu proper has done so for 13 years
(8a7ddc38a6), and qemu-img and qemu-io have
done so for four years (526eda14a6).
Ignoring this signal is especially important in qemu-nbd because
otherwise a client can easily take down the qemu-nbd server by dropping
the connection when the server wants to send something, for example:
$ qemu-nbd -x foo -f raw -t null-co:// &
[1] 12726
$ qemu-io -c quit nbd://localhost/bar
can't open device nbd://localhost/bar: No export with name 'bar' available
[1] + 12726 broken pipe qemu-nbd -x foo -f raw -t null-co://
In this case, the client sends an NBD_OPT_ABORT and closes the
connection (because it is not required to wait for a reply), but the
server replies with an NBD_REP_ACK (because it is required to reply).
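The change itself is the same one-liner the other tools already carry, early
in main() (a sketch; qemu-nbd's real main() obviously does much more):
#include <signal.h>

int main(int argc, char **argv)
{
    /* A client dropping the connection while we reply must not kill the
     * server via SIGPIPE; the write error is handled on the return path. */
    signal(SIGPIPE, SIG_IGN);

    /* ... option parsing and the qemu-nbd serving loop ... */
    return 0;
}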
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-Id: <20170611123714.31292-1-mreitz@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Back in qemu 2.5, qemu-nbd was immune to port probes (a transient
server would not quit, regardless of how many probe connections
came and went, until a connection actually negotiated). But we
broke that in commit ee7d7aa when removing the return value of
nbd_client_new(), although that patch also introduced a bug causing
an assertion failure on a client that fails negotiation. We then
made it worse during refactoring in commit 1a6245a (a segfault
before we could even assert); the (masked) assertion was cleaned
up in d3780c2 (still in 2.6), and just recently we finally fixed
the segfault ("nbd: Fully intialize client in case of failed
negotiation"). But that still means that ever since we added
TLS support to qemu-nbd, we have been vulnerable to an ill-timed
port-scan being able to cause a denial of service by taking down
qemu-nbd before a real client has a chance to connect.
Since negotiation is now handled asynchronously via coroutines,
we can no longer fix this simply by re-adding a return value to
nbd_client_new(). So this patch instead wires
things up to pass the negotiation status through the close_fn
callback function.
Simple test across two terminals:
$ qemu-nbd -f raw -p 30001 file
$ nmap 127.0.0.1 -p 30001 && \
qemu-io -c 'r 0 512' -f raw nbd://localhost:30001
Note that this patch does not change what constitutes successful
negotiation (thus, a client must enter transmission phase before
that client can be considered as a reason to terminate the server
when the connection ends). Perhaps we may want to tweak things
in a later patch to also treat a client that uses NBD_OPT_ABORT
as being a 'successful' negotiation (the client correctly talked
the NBD protocol, and informed us it was not going to use our
export after all), but that's a discussion for another day.
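A sketch of what the callback ends up looking like (the parameter name and the
counter are illustrative, not the literal qemu-nbd code):
static int active_clients;   /* illustrative stand-in for qemu-nbd's counter */

/* close_fn now learns whether the client ever completed negotiation. */
static void nbd_client_closed_sketch(NBDClient *client, bool negotiated)
{
    if (!negotiated) {
        /* A port probe or failed handshake: ignore it, keep serving. */
        return;
    }
    /* Only a client that really negotiated counts towards the transient
     * server's decision to exit once its last connection goes away. */
    active_clients--;
}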
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1451614
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-Id: <20170608222617.20376-1-eblake@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
While at it, drop the current_cpu assignment since this is a
per-thread variable on modern QEMU.
Cc: Vincent Palatin <vpalatin@chromium.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Commit bde4d9205 ("Fix the -accel parameter and the documentation for
'hax'") introduced a regression by adding a new local accel_opts
variable which shadows the variable with the same name that is
declared at the beginning of the main() scope. This causes the
later qemu_tcg_configure() call to always be made with NULL, so
that the thread=xxx option gets ignored. Fix it by removing the
local accel_opts variable and using "opts" instead, which is meant
for storing temporary QemuOpts values.
And while we're at it, also change the exit(1) here to exit(0)
since asking for help is not an error.
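Reduced to a stand-alone sketch, the shadowing bug looks like this (names and
values are illustrative, not the vl.c code; -Wshadow catches exactly this
pattern):
#include <stdio.h>

int main(void)
{
    const char *accel_opts = NULL;          /* the variable the rest of main() uses */

    {
        /* the option-parsing block */
        const char *accel_opts = "tcg,thread=multi";   /* BUG: shadows the outer one */
        (void)accel_opts;                              /* only the local copy is set */
    }

    /* Still NULL here, so the later "configure" step never sees thread=multi. */
    printf("accel_opts = %s\n", accel_opts ? accel_opts : "(null)");
    return 0;
}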
Fixes: bde4d9205e
Reported-by: Markus Armbruster <armbru@redhat.com>
Reported-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1496899257-25800-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
When doing a "make -j10" in the vanilla QEMU source tree (without
running "configure" first), the Makefile currently generates two
files already, qemu-version.h and qemu-options.def. This should not
happen, so let's only build the generated files if config-host.mak
is available (i.e. "configure" has been run already).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-Id: <1496926799-13040-1-git-send-email-thuth@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
This ensures that the request is unref'ed properly, and avoids a
segmentation fault in the new qtest testcase that is added.
This is CVE-2017-9503.
Reported-by: Zhangyanyu <zyy4013@stu.ouc.edu.cn>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Avoid TOC-TOU bugs by passing the frame_cmd down, and checking
cmd->dcmd_opcode instead of cmd->frame->header.frame_cmd.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Instead of having its own mmap handling code, reuse the code from
exec.c.
Note: memory_region_init_ram_from_fd() adds some restrictions
(check for xen, kvm sync-mmu, etc) and changes (such as size
alignment). This may actually be more correct.
Signed-off-by: Marc-André Lureau <marcandre.lureau@redhat.com>
Message-Id: <20170602141229.15326-6-marcandre.lureau@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
V flag for subtraction is:
v = (res ^ src1) & (src1 ^ src2)
(see COMPUTE_CCR() in target/m68k/helper.c)
But gen_flush_flags() uses:
v = (res ^ src2) & (src1 ^ src2)
The problem has been found with the following program:
.global _start
_start:
move.l #-2147483648,%d0
subq.l #1,%d0
jvc 1f
move.l #1,%d1
move.l #1,%d0
trap #0
1:
move.l #0,%d1
move.l #1,%d0
trap #0
It works fine (exit(1)) on real hardware, and with "-singlestep".
"-singlestep" uses gen_helper_flush_flags(), whereas
without "-singlestep", V flag is computed directly in
gen_flush_flags().
This patch updates gen_flush_flags() to have the same result
as with gen_helper_flush_flags().
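Spelled out as a stand-alone check (this mirrors the COMPUTE_CCR() formula
quoted above, not the TCG code):
#include <stdint.h>
#include <stdio.h>

/* V for "res = src1 - src2": set when the operands have different signs and
 * the result's sign differs from src1 (the minuend). */
static int sub_overflow(int32_t src1, int32_t src2, int32_t res)
{
    return (int)((((uint32_t)res ^ (uint32_t)src1) &
                  ((uint32_t)src1 ^ (uint32_t)src2)) >> 31);
}

int main(void)
{
    /* The test program above: INT32_MIN - 1 must set V. */
    int32_t src1 = INT32_MIN, src2 = 1;
    int32_t res = (int32_t)((uint32_t)src1 - (uint32_t)src2);

    printf("V = %d\n", sub_overflow(src1, src2, res));   /* prints V = 1 */
    return 0;
}
With the buggy formula (res ^ src2) & (src1 ^ src2), the same inputs give 0,
which is why the jvc branch was taken when not running with "-singlestep".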
Signed-off-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20170614203905.19657-1-laurent@vivier.eu>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
--
I removed the [HACK] part because the previous patch just checks that
compressed pages are not received.
Right now, if we receive a compressed page while this feature is
disabled, Bad Things (TM) can happen. Just add a test for that.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
--
I had XBZRLE here also, but it doesn't need extra resources on the
destination, only on the source. Additionally, libvirt doesn't enable it
on the destination, so don't put it here.
- initialize invalid_flags at declaration time.
- remove extra space (peter)
Commit 0425dc9 is actually v1 of that patch, but it was accidentally
merged while there was a v2. That causes problems when we try to
migrate to some old QEMUs where the return path is not really there.
Let's fix it; squashing this patch with 0425dc9 then gives exactly the
patch content of v2.
Fixes: 0425dc9 ("migration: isolate return path on src")
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
Those typedefs are needed in both files. New compilers (the F25 ones I
work with) don't complain about repeating a typedef, but older ones do.
Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Let's properly expose the CPU type (machine-type number) via "STORE CPU
ID" and "STORE SUBSYSTEM INFORMATION".
As TCG emulates basic mode, the CPU identification number has the format
"Annnnn", where A is the CPU address and the n digits are parts of the CPU
serial number (0 for us for now).
A specification exception will be injected if the address is not aligned
to a double word. Low address protection will not be checked as
we're missing some more general support for that.
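As a sketch, composing that number could look like this (the 4-bit/20-bit
split is my reading of the "Annnnn" format above, not a quote from the patch):
#include <stdint.h>

/* "Annnnn": first hex digit taken from the CPU address, the remaining five
 * from the CPU serial number, which we model as 0 for now. */
static uint32_t cpu_identification_number(uint8_t cpu_addr, uint32_t serial)
{
    return ((uint32_t)(cpu_addr & 0xf) << 20) | (serial & 0xfffff);
}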
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170609133426.11447-3-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
We can tell from the program interrupt code whether a program interrupt
has to forward the address in the PGM new PSW
(suppression/termination/completion) to point at the next instruction, or
whether it is nullifying and the PSW address does not have to be
incremented.
So let's not modify the PSW address outside of the injection path and
handle this internally. We just have to handle instruction-length
auto-detection if no valid instruction length can be provided.
This should fix various program interrupt injection paths, where the
PSW was not properly forwarded.
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20170609142156.18767-3-david@redhat.com>
Signed-off-by: Richard Henderson <rth@twiddle.net>
Recursive calls into ehci_work_bh can happen with usb-storage devices:
ehci_work_bh calls usb-storage, usb-storage calls into the block layer,
and the block layer may run BHs.
Add a simple bool and just do nothing if we find that ehci_work_bh is
already active.
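The guard is the classic re-entrancy latch, roughly (the field name is
illustrative):
static void ehci_work_bh_sketch(EHCIState *ehci)
{
    if (ehci->working) {
        /* Re-entered via usb-storage -> block layer -> BH: do nothing. */
        return;
    }
    ehci->working = true;

    /* ... process the async and periodic schedules; this may indirectly
     * run bottom halves, which is exactly the recursion we're guarding. */

    ehci->working = false;
}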
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Message-id: 20170612073109.25930-1-kraxel@redhat.com
/* Dynamic hotplug of PV filesystems at runtime is not supported. */
}
struct XenDevOps xen_9pfs_ops = {
    .size  = sizeof(Xen9pfsDev),
    .flags = DEVOPS_FLAG_NEED_GNTDEV,