Compare commits

...

28 Commits

Author SHA1 Message Date
Dinar Valeev
0ee4de5840 hw/input/hid.c Fix capslock hid code
Whenever a USB keyboard is used, e.g. with '-usbdevice keyboard',
pressing the caps lock key sends the 0x32 hid code, which is treated
as backslash. Instead it should be the 0x39 code. This affects sending
uppercase keys, as they are typed with caps lock active.

While on x86 this can be worked around by using the PS/2 protocol, on
Power it is crucial as we don't have anything other than USB.

This fixes guest automation tasks over VNC.

Signed-off-by: Dinar Valeev <dvaleev@suse.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2015-01-22 12:19:48 +01:00
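
A minimal sketch of the table-driven translation this fixes (an illustrative
stand-in, not the actual hw/input/hid.c code): a PS/2 scancode indexes the
hid_usage_keys table and the resulting HID usage ID is what the guest sees,
so the caps lock entry (scancode 0x3a) must hold 0x39 rather than 0x32:

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for hid_usage_keys[]: only the caps lock entry is filled in. */
    static const uint8_t hid_usage_keys_sketch[0x100] = {
        [0x3a] = 0x39,   /* caps lock -> HID usage "Keyboard Caps Lock" */
        /* ... remaining entries elided ... */
    };

    int main(void)
    {
        uint8_t scancode = 0x3a;   /* caps lock make code */
        printf("hid usage 0x%02x\n", hid_usage_keys_sketch[scancode]);
        return 0;
    }
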
Gerd Hoffmann
ba4d26064e hid: handle full ptr queues in post_load
Cc: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Tested-by: Gonglei <arei.gonglei@huawei.com>
Reviewed-by: Gonglei <arei.gonglei@huawei.com>
2015-01-22 12:19:48 +01:00
Gerd Hoffmann
4083ae311d input: improve docs for input-send-event qmp command
Text partly suggested by Markus Armbruster <armbru@redhat.com>

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2015-01-22 12:19:48 +01:00
Peter Maydell
699eae17b8 Merge remote-tracking branch 'remotes/pmaydell/tags/pull-misc-20150120' into staging
Miscellaneous cross-tree patches:
 * load/store helper cleanup
 * drop TARGET_HAS_ICE define and checks
 * scripts/qapi-types.py: Add dummy member to empty structs
 * cpu_ldst.h: Don't define helpers if MMU_MODE*_SUFFIX not defined

# gpg: Signature made Tue 20 Jan 2015 15:43:38 GMT using RSA key ID 14360CDE
# gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>"

* remotes/pmaydell/tags/pull-misc-20150120:
  cpu_ldst.h: Don't define helpers if MMU_MODE*_SUFFIX not defined
  cpu_ldst.h, cpu-all.h, bswap.h: Update documentation on ld/st accessors
  cpu_ldst_template.h: Drop unused cpu_ldfq/stfq/ldfl/stfl accessors
  cpu_ldst.h: Drop unused _raw macros, saddr() and laddr()
  cpu_ldst_template.h: Use ld*_p directly rather than via ld*_raw macros
  cpu_ldst.h: Use inline functions for usermode cpu_ld/st accessors
  cpu_ldst.h: Remove unused very short ld*/st* defines
  cpu_ldst.h: Drop unused ld/st*_kernel defines
  target-mips: Don't use _raw load/store accessors
  linux-user/main.c (m68k): Use get_user_u16 rather than lduw in cpu_loop
  linux-user/vm86.c: Use cpu_ldl_data &c rather than plain ldl &c
  bsd-user/elfload.c: Don't use ldl() or ldq_raw()
  linux-user/elfload.c: Don't use _raw accessor functions
  target-sparc: Don't use {ld, st}*_raw functions
  monitor.c: Use ld*_p() instead of ld*_raw()
  cpu_ldst.h: Remove unused ldul_ macros
  exec.c: Drop TARGET_HAS_ICE define and checks
  scripts/qapi-types.py: Add dummy member to empty structs

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-01-20 16:19:58 +00:00
Peter Maydell
de5ee4a888 cpu_ldst.h: Don't define helpers if MMU_MODE*_SUFFIX not defined
Not all targets define a full set of suffix strings for the
NB_MMU_MODES that they have. In this situation, don't define any
helper functions for that mode, rather than defining helper functions
with no suffix at all. The MMU mode is still functional; it is merely
not directly accessible via cpu_ld*_MODE from target helper functions.

Also add an "NB_MMU_MODES >= 2" check to the definition of the mode 1
helpers -- some targets only define one MMU mode.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1421432008-6786-1-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:35 +00:00
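
For context, the cpu_ld*/cpu_st* helpers are generated from the
MMU_MODE*_SUFFIX macros a target defines in its cpu.h. A hypothetical
target header (names are illustrative, not taken from any real target)
and the resulting guards look roughly like this:

    /* Hypothetical target cpu.h fragment: only MMU mode 0 is named, so only
     * the cpu_ld*_kernel/cpu_st*_kernel helpers get generated; a mode without
     * a suffix no longer produces helpers with no suffix at all. */
    #define NB_MMU_MODES     2
    #define MMU_MODE0_SUFFIX _kernel
    /* MMU_MODE1_SUFFIX intentionally left undefined */

    /* cpu_ldst.h then effectively guards each template include like this: */
    #ifdef MMU_MODE0_SUFFIX
    /* ... generates cpu_ldub_kernel(), cpu_stl_kernel(), ... */
    #endif
    #if (NB_MMU_MODES >= 2) && defined(MMU_MODE1_SUFFIX)
    /* ... skipped for this hypothetical target ... */
    #endif
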
Peter Maydell
db5fd8d709 cpu_ldst.h, cpu-all.h, bswap.h: Update documentation on ld/st accessors
Add documentation of what the cpu_*_* accessors look like.
Correct some minor errors in the existing documentation of the
direct _p accessor family. Remove the near-duplicate comment
on the _p accessors from cpu-all.h and replace it with a reference
to the comment in bswap.h.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-16-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:35 +00:00
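
As a quick illustration of the documented naming scheme: the direct _p
accessors take a host pointer and encode the size (b/w/l/q) and endianness
(be/le) in their names. A self-contained sketch with hand-rolled stand-ins
rather than the real bswap.h inlines:

    #include <stdint.h>

    /* 32-bit little-endian load from a host pointer (stand-in for ldl_le_p). */
    static uint32_t ldl_le_p_sketch(const void *ptr)
    {
        const uint8_t *p = ptr;
        return p[0] | (p[1] << 8) | (p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    /* 16-bit big-endian store to a host pointer (stand-in for stw_be_p). */
    static void stw_be_p_sketch(void *ptr, uint16_t v)
    {
        uint8_t *p = ptr;
        p[0] = v >> 8;
        p[1] = v & 0xff;
    }
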
Peter Maydell
82f11917c9 cpu_ldst_template.h: Drop unused cpu_ldfq/stfq/ldfl/stfl accessors
The cpu_ldfq/stfq/ldfl/stfl accessors for loading and storing
float32 and float64 are completely unused, so delete them.
(The union they use for converting from the float32/float64
type to uint32_t or uint64_t is the wrong way to do it anyway:
they should be using make_float* and float*_val.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-15-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:34 +00:00
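
The parenthetical refers to the softfloat helpers: code that needs to move
between the raw bit pattern and the float32/float64 types should use
make_float*() and float*_val() rather than punning through a union. A
hedged sketch, assuming QEMU's softfloat header:

    #include <stdint.h>
    #include "fpu/softfloat.h"   /* float64, make_float64(), float64_val() */

    /* Wrap a raw 64-bit pattern as a float64 without a union. */
    static float64 bits_to_float64(uint64_t bits)
    {
        return make_float64(bits);
    }

    /* Unwrap a float64 back to its raw bit pattern. */
    static uint64_t float64_to_bits(float64 f)
    {
        return float64_val(f);
    }
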
Peter Maydell
800e2ecc89 cpu_ldst.h: Drop unused _raw macros, saddr() and laddr()
The _raw macros and their helpers saddr() and laddr() are now
totally unused -- delete them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-14-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:34 +00:00
Peter Maydell
355392329e cpu_ldst_template.h: Use ld*_p directly rather than via ld*_raw macros
The ld*_raw and st*_raw macros are now only used within the code
produced by cpu_ldst_template.h, and only in three places.
Expand these out to just call the ld_p and st_p functions directly.

Note that in all the callsites the address argument is a uintptr_t,
so we can drop that part of the double-cast used in the saddr() and
laddr() macros.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-13-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:34 +00:00
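
In other words, once the host address is already a uintptr_t, the old
laddr()/saddr() double-cast collapses to a single pointer cast in front of
the _p accessor. A minimal stand-alone sketch of the resulting fast-path
pattern (ldl_p is modelled with memcpy here, not the real inline):

    #include <stdint.h>
    #include <string.h>

    static uint32_t ldl_p_sketch(const uint8_t *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }

    static uint32_t fast_path_load(uintptr_t hostaddr)
    {
        /* one cast, no (uint8_t *)(intptr_t) double-cast needed */
        return ldl_p_sketch((uint8_t *)hostaddr);
    }
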
Peter Maydell
9220fe54c6 cpu_ldst.h: Use inline functions for usermode cpu_ld/st accessors
Use inline functions rather than macros for cpu_ld/st accessors
for the *-user configurations, as we already do for softmmu.
This has two advantages:
 * we can actually typecheck our arguments
 * we don't need to leak the _raw macros everywhere

Since the _kernel functions were only used by target-i386/seg_helper.c,
put the definitions for them in that file too. (It already has the
similar template include code to define them for the softmmu case,
so it makes sense to have it deal with defining them for user-only.)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-12-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:34 +00:00
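
The typechecking point can be seen with a side-by-side sketch (the types and
helper below are stand-ins, not QEMU's real definitions): the macro form
silently discards env and accepts anything, while the inline-function form
gets genuine argument checking from the compiler:

    #include <stdint.h>

    typedef struct CPUArchStateSketch CPUArchStateSketch;
    typedef unsigned long target_ulong_sketch;

    uint32_t ldl_from_guest(target_ulong_sketch ptr);   /* stand-in loader */

    /* old style: env is ignored and nothing is typechecked */
    #define cpu_ldl_data_macro(env, ptr)  ldl_from_guest(ptr)

    /* new style: arguments are typechecked like any C function call */
    static inline uint32_t cpu_ldl_data_inline(CPUArchStateSketch *env,
                                               target_ulong_sketch ptr)
    {
        (void)env;   /* unused in user-only mode, but still typechecked */
        return ldl_from_guest(ptr);
    }
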
Peter Maydell
177ea79f65 cpu_ldst.h: Remove unused very short ld*/st* defines
The very short ld*/st* defines are now not used anywhere; delete them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-11-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:34 +00:00
Peter Maydell
5a0826f7d2 cpu_ldst.h: Drop unused ld/st*_kernel defines
The ld*_kernel and st*_kernel defines are not used anywhere;
delete them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-10-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:34 +00:00
Peter Maydell
1535300119 target-mips: Don't use _raw load/store accessors
Use cpu_*_data instead of the direct *_raw load/store accessors.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-9-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:33 +00:00
Peter Maydell
d8d5119cae linux-user/main.c (m68k): Use get_user_u16 rather than lduw in cpu_loop
In the m68k cpu_loop() use get_user_u16 to read the immediate for
the simcall rather than lduw, to bring it into line with how other
archs do it and to remove another user of the ldl family of functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-8-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:33 +00:00
Peter Maydell
5899d6d0b4 linux-user/vm86.c: Use cpu_ldl_data &c rather than plain ldl &c
Use the cpu_ld*_data and cpu_st*_data family of functions to access
guest memory in vm86.c rather than the very short-named ldl/stl functions.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-7-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:33 +00:00
Peter Maydell
b8d6ac9f90 bsd-user/elfload.c: Don't use ldl() or ldq_raw()
Use get_user_u64() and get_user_ual() instead of the ldl() and
ldq_raw() functions.

[Note that this change is not compile tested as it is actually
in dead code -- none of the bsd-user configurations are PPC.]

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-6-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:33 +00:00
Peter Maydell
2ccf97ec0f linux-user/elfload.c: Don't use _raw accessor functions
The _raw accessor functions are an implementation detail that has
leaked out to some callsites. Use get_user_u64() instead of ldq_raw().

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-5-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:33 +00:00
Peter Maydell
eb513f82f0 target-sparc: Don't use {ld, st}*_raw functions
Instead of using the _raw family of ld/st accessor functions, use
cpu_*_data. All this code is CONFIG_USER_ONLY, so the two are the
same semantically, but the _raw functions are really a detail of
the implementation which has leaked into a few callsites like this one.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-4-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:32 +00:00
Peter Maydell
24e60305c5 monitor.c: Use ld*_p() instead of ld*_raw()
The monitor code for doing a memory_dump() was using ld*_raw() to do
target-CPU accesses out of a local buf[] array. The correct functions
for this purpose are ld*_p(), which take a host pointer, rather than
ld*_raw(), which take an integer representing a guest address and
are somewhat meaningless in softmmu configurations. Nobody noticed
because for softmmu the _raw functions are the same as ldl_p but
with some extra casts thrown in. Switch to using the correct functions
instead.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 1421334118-3287-3-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:32 +00:00
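
The distinction the commit relies on: ld*_p() takes a host pointer (here,
into the monitor's local buf[]), while the cpu_ld*_data() family takes a CPU
env plus a guest address. A short stand-alone sketch of the buffer decode
that memory_dump() performs, with the accessor modelled by memcpy:

    #include <stdint.h>
    #include <string.h>

    /* Load a wsize-byte word from a local host buffer; the real code picks
     * ldub_p/lduw_p/ldl_p/ldq_p by wsize instead of using memcpy. */
    static uint64_t load_word_sketch(const uint8_t *buf, int wsize)
    {
        uint64_t v = 0;
        memcpy(&v, buf, wsize);
        return v;
    }
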
Peter Maydell
0c021c1fd2 cpu_ldst.h: Remove unused ldul_ macros
The five ldul_ macros are not used anywhere and are marked up with an XXX
comment. "ldul" is a non-standard prefix for our family of load instructions:
we don't mark 32-bit accesses for signedness because they return a 32 bit
quantity. So just delete them.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1421334118-3287-2-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:32 +00:00
Peter Maydell
ec53b45bcd exec.c: Drop TARGET_HAS_ICE define and checks
The TARGET_HAS_ICE #define is intended to indicate whether a target-*
guest CPU implementation supports the breakpoint handling. However,
all our guest CPUs have that support (the only two which do not
define TARGET_HAS_ICE are unicore32 and openrisc, and in both those
cases the bp support is present and the lack of the #define is just
a bug). So remove the #define entirely: all new guest CPU support
should include breakpoint handling as part of the basic implementation.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-id: 1420484960-32365-1-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:32 +00:00
Peter Maydell
83ecb22ba2 scripts/qapi-types.py: Add dummy member to empty structs
Make sure that all generated C structs have at least one field; this
avoids potential issues with attempting to malloc space for
zero-length structs in C (g_malloc(sizeof struct) would return NULL).
It also avoids an incompatibility with C++ (where an empty struct is
size 1); that isn't important to us now but might be in future.

Generated empty structures look like this:
    struct Abort
    {
        char qapi_dummy_field_for_empty_struct;
    };

This silences clang warnings like:
./qapi-types.h:3752:1: warning: empty struct has size 0 in C, size 1 in C++ [-Wextern-c-compat]
struct Abort
^

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-id: 1419359069-16611-1-git-send-email-peter.maydell@linaro.org
2015-01-20 15:19:32 +00:00
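
Concretely, sizeof() of an empty struct is 0 in (GNU) C and g_malloc(0)
returns NULL, so the usual allocation idiom would hand back a NULL pointer;
with the dummy member the size is at least 1 and allocation always succeeds.
An illustrative snippet using the generated Abort shape shown above:

    #include <glib.h>

    struct Abort {
        char qapi_dummy_field_for_empty_struct;
    };

    int main(void)
    {
        struct Abort *a = g_malloc0(sizeof(*a));   /* sizeof >= 1, never NULL */
        g_free(a);
        return 0;
    }
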
Peter Maydell
a5bd4470ed Merge remote-tracking branch 'remotes/sstabellini/xen-2015-01-20-v2' into staging
* remotes/sstabellini/xen-2015-01-20-v2:
  xen: add a lock for the mapcache
  xen: do not use __-named variables in mapcache
  Xen: Use the ioreq-server API when available
  Add device listener interface

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-01-20 14:34:38 +00:00
Paolo Bonzini
86a6a9bf55 xen: add a lock for the mapcache
Extend the existing dummy mapcache_lock/unlock macros to cover all of
xen-mapcache.c.  This prepares for unlocked memory access, when parts
of exec.c will not be protected by the BQL.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2015-01-20 14:24:17 +00:00
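
For orientation (a hypothetical illustration, not taken from this commit's
diff): mapcache_lock()/unlock() are currently dummy macros, and widening
their coverage means a real mutex can later be slotted in without touching
the call sites once the affected exec.c paths stop relying on the BQL:

    /* stage 1 (what the dummies presumably look like today): */
    #define mapcache_lock()    ((void)0)
    #define mapcache_unlock()  ((void)0)

    /* stage 2 (a possible follow-up, using QEMU's qemu/thread.h mutex API):
     *   static QemuMutex mapcache_mutex;
     *   #define mapcache_lock()    qemu_mutex_lock(&mapcache_mutex)
     *   #define mapcache_unlock()  qemu_mutex_unlock(&mapcache_mutex)
     */
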
Paolo Bonzini
9b6d7b365d xen: do not use __-named variables in mapcache
Keep the namespace clean.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2015-01-20 14:24:13 +00:00
Paul Durrant
3996e85c18 Xen: Use the ioreq-server API when available
The ioreq-server API added to Xen 4.5 offers better security than
the existing Xen/QEMU interface because the shared pages that are
used to pass emulation request/results back and forth are removed
from the guest's memory space before any requests are serviced.
This prevents the guest from mapping these pages (they are in a
well known location) and attempting to attack QEMU by synthesizing
its own request structures. Hence, this patch modifies configure
to detect whether the API is available, and adds the necessary
code to use the API if it is.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
2015-01-20 14:24:10 +00:00
Paul Durrant
707ff80021 Add device listener interface
The Xen ioreq-server API, introduced in Xen 4.5, requires that PCI device
models explicitly register with Xen for config space accesses. This patch
adds a listener interface into qdev-core which can be used by the Xen
interface code to monitor for arrival and departure of PCI devices.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
2015-01-20 14:24:07 +00:00
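
Usage follows from the DeviceListener struct and registration helpers in the
qdev-core diff further down; a hedged sketch (the callback bodies and names
here are placeholders, not the actual Xen implementation):

    #include "hw/qdev-core.h"

    static void my_device_realize(DeviceListener *listener, DeviceState *dev)
    {
        /* e.g. register a newly realized PCI device with Xen */
    }

    static void my_device_unrealize(DeviceListener *listener, DeviceState *dev)
    {
        /* e.g. withdraw that registration again */
    }

    static DeviceListener my_listener = {
        .realize = my_device_realize,
        .unrealize = my_device_unrealize,
    };

    void my_component_init(void)
    {
        /* registration also walks the existing device tree and invokes
         * the realize callback for each device found */
        device_listener_register(&my_listener);
    }
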
Peter Maydell
74acb99737 Merge remote-tracking branch 'remotes/kraxel/tags/pull-console-20150119-1' into staging
ui: add shared surface format negotiation.

# gpg: Signature made Mon 19 Jan 2015 12:47:36 GMT using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg:                 aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg:                 aka "Gerd Hoffmann (private) <kraxel@gmail.com>"

* remotes/kraxel/tags/pull-console-20150119-1:
  ui/sdl2: Support shared surface for more pixman formats
  ui/sdl: Support shared surface for more pixman formats
  ui/gtk: Support shared surface for most pixman formats
  ui/spice: Support shared surface for most pixman formats
  ui/vnc: Support shared surface for most pixman formats
  ui/pixman: add qemu_pixman_check_format
  ui: Add dpy_gfx_check_format() to check backend shared surface support
  ui: Make qemu_default_pixman_format() return 0 on unsupported formats

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2015-01-19 13:37:05 +00:00
40 changed files with 795 additions and 356 deletions

View File

@@ -351,8 +351,10 @@ static inline void init_thread(struct target_pt_regs *_regs, struct image_info *
_regs->gpr[1] = infop->start_stack;
#if defined(TARGET_PPC64) && !defined(TARGET_ABI32)
entry = ldq_raw(infop->entry) + infop->load_addr;
toc = ldq_raw(infop->entry + 8) + infop->load_addr;
get_user_u64(entry, infop->entry);
entry += infop->load_addr;
get_user_u64(toc, infop->entry + 8);
toc += infop->load_addr;
_regs->gpr[2] = toc;
infop->entry = entry;
#endif
@@ -365,8 +367,9 @@ static inline void init_thread(struct target_pt_regs *_regs, struct image_info *
get_user_ual(_regs->gpr[3], pos);
pos += sizeof(abi_ulong);
_regs->gpr[4] = pos;
for (tmp = 1; tmp != 0; pos += sizeof(abi_ulong))
tmp = ldl(pos);
for (tmp = 1; tmp != 0; pos += sizeof(abi_ulong)) {
get_user_ual(tmp, pos);
}
_regs->gpr[5] = pos;
}

29
configure vendored
View File

@@ -1869,6 +1869,32 @@ EOF
#if !defined(HVM_MAX_VCPUS)
# error HVM_MAX_VCPUS not defined
#endif
int main(void) {
xc_interface *xc;
xs_daemon_open();
xc = xc_interface_open(0, 0, 0);
xc_hvm_set_mem_type(0, 0, HVMMEM_ram_ro, 0, 0);
xc_gnttab_open(NULL, 0);
xc_domain_add_to_physmap(0, 0, XENMAPSPACE_gmfn, 0, 0);
xc_hvm_inject_msi(xc, 0, 0xf0000000, 0x00000000);
xc_hvm_create_ioreq_server(xc, 0, 0, NULL);
return 0;
}
EOF
compile_prog "" "$xen_libs"
then
xen_ctrl_version=450
xen=yes
elif
cat > $TMPC <<EOF &&
#include <xenctrl.h>
#include <xenstore.h>
#include <stdint.h>
#include <xen/hvm/hvm_info_table.h>
#if !defined(HVM_MAX_VCPUS)
# error HVM_MAX_VCPUS not defined
#endif
int main(void) {
xc_interface *xc;
xs_daemon_open();
@@ -4283,6 +4309,9 @@ if test -n "$sparc_cpu"; then
echo "Target Sparc Arch $sparc_cpu"
fi
echo "xen support $xen"
if test "$xen" = "yes" ; then
echo "xen ctrl version $xen_ctrl_version"
fi
echo "brlapi support $brlapi"
echo "bluez support $bluez"
echo "Documentation $docs"

16
exec.c
View File

@@ -553,7 +553,6 @@ void cpu_exec_init(CPUArchState *env)
}
}
#if defined(TARGET_HAS_ICE)
#if defined(CONFIG_USER_ONLY)
static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
{
@@ -569,7 +568,6 @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
}
}
#endif
#endif /* TARGET_HAS_ICE */
#if defined(CONFIG_USER_ONLY)
void cpu_watchpoint_remove_all(CPUState *cpu, int mask)
@@ -689,7 +687,6 @@ static inline bool cpu_watchpoint_address_matches(CPUWatchpoint *wp,
int cpu_breakpoint_insert(CPUState *cpu, vaddr pc, int flags,
CPUBreakpoint **breakpoint)
{
#if defined(TARGET_HAS_ICE)
CPUBreakpoint *bp;
bp = g_malloc(sizeof(*bp));
@@ -710,15 +707,11 @@ int cpu_breakpoint_insert(CPUState *cpu, vaddr pc, int flags,
*breakpoint = bp;
}
return 0;
#else
return -ENOSYS;
#endif
}
/* Remove a specific breakpoint. */
int cpu_breakpoint_remove(CPUState *cpu, vaddr pc, int flags)
{
#if defined(TARGET_HAS_ICE)
CPUBreakpoint *bp;
QTAILQ_FOREACH(bp, &cpu->breakpoints, entry) {
@@ -728,27 +721,21 @@ int cpu_breakpoint_remove(CPUState *cpu, vaddr pc, int flags)
}
}
return -ENOENT;
#else
return -ENOSYS;
#endif
}
/* Remove a specific breakpoint by reference. */
void cpu_breakpoint_remove_by_ref(CPUState *cpu, CPUBreakpoint *breakpoint)
{
#if defined(TARGET_HAS_ICE)
QTAILQ_REMOVE(&cpu->breakpoints, breakpoint, entry);
breakpoint_invalidate(cpu, breakpoint->pc);
g_free(breakpoint);
#endif
}
/* Remove all matching breakpoints. */
void cpu_breakpoint_remove_all(CPUState *cpu, int mask)
{
#if defined(TARGET_HAS_ICE)
CPUBreakpoint *bp, *next;
QTAILQ_FOREACH_SAFE(bp, &cpu->breakpoints, entry, next) {
@@ -756,14 +743,12 @@ void cpu_breakpoint_remove_all(CPUState *cpu, int mask)
cpu_breakpoint_remove_by_ref(cpu, bp);
}
}
#endif
}
/* enable or disable single step mode. EXCP_DEBUG is returned by the
CPU loop after each instruction */
void cpu_single_step(CPUState *cpu, int enabled)
{
#if defined(TARGET_HAS_ICE)
if (cpu->singlestep_enabled != enabled) {
cpu->singlestep_enabled = enabled;
if (kvm_enabled()) {
@@ -775,7 +760,6 @@ void cpu_single_step(CPUState *cpu, int enabled)
tb_flush(env);
}
}
#endif
}
void cpu_abort(CPUState *cpu, const char *fmt, ...)

View File

@@ -189,6 +189,56 @@ int qdev_init(DeviceState *dev)
return 0;
}
static QTAILQ_HEAD(device_listeners, DeviceListener) device_listeners
= QTAILQ_HEAD_INITIALIZER(device_listeners);
enum ListenerDirection { Forward, Reverse };
#define DEVICE_LISTENER_CALL(_callback, _direction, _args...) \
do { \
DeviceListener *_listener; \
\
switch (_direction) { \
case Forward: \
QTAILQ_FOREACH(_listener, &device_listeners, link) { \
if (_listener->_callback) { \
_listener->_callback(_listener, ##_args); \
} \
} \
break; \
case Reverse: \
QTAILQ_FOREACH_REVERSE(_listener, &device_listeners, \
device_listeners, link) { \
if (_listener->_callback) { \
_listener->_callback(_listener, ##_args); \
} \
} \
break; \
default: \
abort(); \
} \
} while (0)
static int device_listener_add(DeviceState *dev, void *opaque)
{
DEVICE_LISTENER_CALL(realize, Forward, dev);
return 0;
}
void device_listener_register(DeviceListener *listener)
{
QTAILQ_INSERT_TAIL(&device_listeners, listener, link);
qbus_walk_children(sysbus_get_default(), NULL, NULL, device_listener_add,
NULL, NULL);
}
void device_listener_unregister(DeviceListener *listener)
{
QTAILQ_REMOVE(&device_listeners, listener, link);
}
static void device_realize(DeviceState *dev, Error **errp)
{
DeviceClass *dc = DEVICE_GET_CLASS(dev);
@@ -994,6 +1044,8 @@ static void device_set_realized(Object *obj, bool value, Error **errp)
goto fail;
}
DEVICE_LISTENER_CALL(realize, Forward, dev);
hotplug_ctrl = qdev_get_hotplug_handler(dev);
if (hotplug_ctrl) {
hotplug_handler_plug(hotplug_ctrl, dev, &local_err);
@@ -1035,6 +1087,7 @@ static void device_set_realized(Object *obj, bool value, Error **errp)
dc->unrealize(dev, local_errp);
}
dev->pending_deleted_event = true;
DEVICE_LISTENER_CALL(unrealize, Reverse, dev);
}
if (local_err != NULL) {

View File

@@ -41,7 +41,7 @@ static const uint8_t hid_usage_keys[0x100] = {
0x07, 0x09, 0x0a, 0x0b, 0x0d, 0x0e, 0x0f, 0x33,
0x34, 0x35, 0xe1, 0x31, 0x1d, 0x1b, 0x06, 0x19,
0x05, 0x11, 0x10, 0x36, 0x37, 0x38, 0xe5, 0x55,
0xe2, 0x2c, 0x32, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e,
0xe2, 0x2c, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e,
0x3f, 0x40, 0x41, 0x42, 0x43, 0x53, 0x47, 0x5f,
0x60, 0x61, 0x56, 0x5c, 0x5d, 0x5e, 0x57, 0x59,
0x5a, 0x5b, 0x62, 0x63, 0x00, 0x00, 0x00, 0x44,
@@ -514,6 +514,27 @@ static int hid_post_load(void *opaque, int version_id)
HIDState *s = opaque;
hid_set_next_idle(s);
if (s->n == QUEUE_LENGTH && (s->kind == HID_TABLET ||
s->kind == HID_MOUSE)) {
/*
* Handle ptr device migration from old qemu with full queue.
*
* Throw away everything but the last event, so we propagate
* at least the current button state to the guest. Also keep
* current position for the tablet, signal "no motion" for the
* mouse.
*/
HIDPointerEvent evt;
evt = s->ptr.queue[(s->head+s->n) & QUEUE_MASK];
if (s->kind == HID_MOUSE) {
evt.xdx = 0;
evt.ydy = 0;
}
s->ptr.queue[0] = evt;
s->head = 0;
s->n = 1;
}
return 0;
}

View File

@@ -115,43 +115,9 @@ static inline void tswap64s(uint64_t *s)
#define bswaptls(s) bswap64s(s)
#endif
/* CPU memory access without any memory or io remapping */
/*
* the generic syntax for the memory accesses is:
*
* load: ld{type}{sign}{size}{endian}_{access_type}(ptr)
*
* store: st{type}{size}{endian}_{access_type}(ptr, val)
*
* type is:
* (empty): integer access
* f : float access
*
* sign is:
* (empty): for floats or 32 bit size
* u : unsigned
* s : signed
*
* size is:
* b: 8 bits
* w: 16 bits
* l: 32 bits
* q: 64 bits
*
* endian is:
* (empty): target cpu endianness or 8 bit access
* r : reversed target cpu endianness (not implemented yet)
* be : big endian (not implemented yet)
* le : little endian (not implemented yet)
*
* access_type is:
* raw : host memory access
* user : user mode access using soft MMU
* kernel : kernel mode access using soft MMU
/* Target-endianness CPU memory access functions. These fit into the
* {ld,st}{type}{sign}{size}{endian}_p naming scheme described in bswap.h.
*/
/* target-endianness CPU memory access functions */
#if defined(TARGET_WORDS_BIGENDIAN)
#define lduw_p(p) lduw_be_p(p)
#define ldsw_p(p) ldsw_be_p(p)

View File

@@ -23,7 +23,26 @@
*
* Used by target op helpers.
*
* MMU mode suffixes are defined in target cpu.h.
* The syntax for the accessors is:
*
* load: cpu_ld{sign}{size}_{mmusuffix}(env, ptr)
*
* store: cpu_st{sign}{size}_{mmusuffix}(env, ptr, val)
*
* sign is:
* (empty): for 32 and 64 bit sizes
* u : unsigned
* s : signed
*
* size is:
* b: 8 bits
* w: 16 bits
* l: 32 bits
* q: 64 bits
*
* mmusuffix is one of the generic suffixes "data" or "code", or
* (for softmmu configs) a target-specific MMU mode suffix as defined
* in target cpu.h.
*/
#ifndef CPU_LDST_H
#define CPU_LDST_H
@@ -53,113 +72,44 @@
h2g_nocheck(x); \
})
#define saddr(x) g2h(x)
#define laddr(x) g2h(x)
#else /* !CONFIG_USER_ONLY */
/* NOTE: we use double casts if pointers and target_ulong have
different sizes */
#define saddr(x) (uint8_t *)(intptr_t)(x)
#define laddr(x) (uint8_t *)(intptr_t)(x)
#endif
#define ldub_raw(p) ldub_p(laddr((p)))
#define ldsb_raw(p) ldsb_p(laddr((p)))
#define lduw_raw(p) lduw_p(laddr((p)))
#define ldsw_raw(p) ldsw_p(laddr((p)))
#define ldl_raw(p) ldl_p(laddr((p)))
#define ldq_raw(p) ldq_p(laddr((p)))
#define ldfl_raw(p) ldfl_p(laddr((p)))
#define ldfq_raw(p) ldfq_p(laddr((p)))
#define stb_raw(p, v) stb_p(saddr((p)), v)
#define stw_raw(p, v) stw_p(saddr((p)), v)
#define stl_raw(p, v) stl_p(saddr((p)), v)
#define stq_raw(p, v) stq_p(saddr((p)), v)
#define stfl_raw(p, v) stfl_p(saddr((p)), v)
#define stfq_raw(p, v) stfq_p(saddr((p)), v)
#if defined(CONFIG_USER_ONLY)
/* if user mode, no other memory access functions */
#define ldub(p) ldub_raw(p)
#define ldsb(p) ldsb_raw(p)
#define lduw(p) lduw_raw(p)
#define ldsw(p) ldsw_raw(p)
#define ldl(p) ldl_raw(p)
#define ldq(p) ldq_raw(p)
#define ldfl(p) ldfl_raw(p)
#define ldfq(p) ldfq_raw(p)
#define stb(p, v) stb_raw(p, v)
#define stw(p, v) stw_raw(p, v)
#define stl(p, v) stl_raw(p, v)
#define stq(p, v) stq_raw(p, v)
#define stfl(p, v) stfl_raw(p, v)
#define stfq(p, v) stfq_raw(p, v)
/* In user-only mode we provide only the _code and _data accessors. */
#define cpu_ldub_code(env1, p) ldub_raw(p)
#define cpu_ldsb_code(env1, p) ldsb_raw(p)
#define cpu_lduw_code(env1, p) lduw_raw(p)
#define cpu_ldsw_code(env1, p) ldsw_raw(p)
#define cpu_ldl_code(env1, p) ldl_raw(p)
#define cpu_ldq_code(env1, p) ldq_raw(p)
#define MEMSUFFIX _data
#define DATA_SIZE 1
#include "exec/cpu_ldst_useronly_template.h"
#define cpu_ldub_data(env, addr) ldub_raw(addr)
#define cpu_lduw_data(env, addr) lduw_raw(addr)
#define cpu_ldsw_data(env, addr) ldsw_raw(addr)
#define cpu_ldl_data(env, addr) ldl_raw(addr)
#define cpu_ldq_data(env, addr) ldq_raw(addr)
#define DATA_SIZE 2
#include "exec/cpu_ldst_useronly_template.h"
#define cpu_stb_data(env, addr, data) stb_raw(addr, data)
#define cpu_stw_data(env, addr, data) stw_raw(addr, data)
#define cpu_stl_data(env, addr, data) stl_raw(addr, data)
#define cpu_stq_data(env, addr, data) stq_raw(addr, data)
#define DATA_SIZE 4
#include "exec/cpu_ldst_useronly_template.h"
#define cpu_ldub_kernel(env, addr) ldub_raw(addr)
#define cpu_lduw_kernel(env, addr) lduw_raw(addr)
#define cpu_ldsw_kernel(env, addr) ldsw_raw(addr)
#define cpu_ldl_kernel(env, addr) ldl_raw(addr)
#define cpu_ldq_kernel(env, addr) ldq_raw(addr)
#define DATA_SIZE 8
#include "exec/cpu_ldst_useronly_template.h"
#undef MEMSUFFIX
#define cpu_stb_kernel(env, addr, data) stb_raw(addr, data)
#define cpu_stw_kernel(env, addr, data) stw_raw(addr, data)
#define cpu_stl_kernel(env, addr, data) stl_raw(addr, data)
#define cpu_stq_kernel(env, addr, data) stq_raw(addr, data)
#define MEMSUFFIX _code
#define CODE_ACCESS
#define DATA_SIZE 1
#include "exec/cpu_ldst_useronly_template.h"
#define ldub_kernel(p) ldub_raw(p)
#define ldsb_kernel(p) ldsb_raw(p)
#define lduw_kernel(p) lduw_raw(p)
#define ldsw_kernel(p) ldsw_raw(p)
#define ldl_kernel(p) ldl_raw(p)
#define ldq_kernel(p) ldq_raw(p)
#define ldfl_kernel(p) ldfl_raw(p)
#define ldfq_kernel(p) ldfq_raw(p)
#define stb_kernel(p, v) stb_raw(p, v)
#define stw_kernel(p, v) stw_raw(p, v)
#define stl_kernel(p, v) stl_raw(p, v)
#define stq_kernel(p, v) stq_raw(p, v)
#define stfl_kernel(p, v) stfl_raw(p, v)
#define stfq_kernel(p, vt) stfq_raw(p, v)
#define DATA_SIZE 2
#include "exec/cpu_ldst_useronly_template.h"
#define cpu_ldub_data(env, addr) ldub_raw(addr)
#define cpu_lduw_data(env, addr) lduw_raw(addr)
#define cpu_ldl_data(env, addr) ldl_raw(addr)
#define DATA_SIZE 4
#include "exec/cpu_ldst_useronly_template.h"
#define cpu_stb_data(env, addr, data) stb_raw(addr, data)
#define cpu_stw_data(env, addr, data) stw_raw(addr, data)
#define cpu_stl_data(env, addr, data) stl_raw(addr, data)
#define DATA_SIZE 8
#include "exec/cpu_ldst_useronly_template.h"
#undef MEMSUFFIX
#undef CODE_ACCESS
#else
/* XXX: find something cleaner.
* Furthermore, this is false for 64 bits targets
*/
#define ldul_user ldl_user
#define ldul_kernel ldl_kernel
#define ldul_hypv ldl_hypv
#define ldul_executive ldl_executive
#define ldul_supervisor ldl_supervisor
/* The memory helpers for tcg-generated code need tcg_target_long etc. */
#include "tcg.h"
@@ -182,6 +132,7 @@ uint16_t helper_ldw_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint32_t helper_ldl_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#ifdef MMU_MODE0_SUFFIX
#define CPU_MMU_INDEX 0
#define MEMSUFFIX MMU_MODE0_SUFFIX
#define DATA_SIZE 1
@@ -197,7 +148,9 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#endif
#if (NB_MMU_MODES >= 2) && defined(MMU_MODE1_SUFFIX)
#define CPU_MMU_INDEX 1
#define MEMSUFFIX MMU_MODE1_SUFFIX
#define DATA_SIZE 1
@@ -213,8 +166,9 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#include "exec/cpu_ldst_template.h"
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#endif
#if (NB_MMU_MODES >= 3)
#if (NB_MMU_MODES >= 3) && defined(MMU_MODE2_SUFFIX)
#define CPU_MMU_INDEX 2
#define MEMSUFFIX MMU_MODE2_SUFFIX
@@ -233,7 +187,7 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#undef MEMSUFFIX
#endif /* (NB_MMU_MODES >= 3) */
#if (NB_MMU_MODES >= 4)
#if (NB_MMU_MODES >= 4) && defined(MMU_MODE3_SUFFIX)
#define CPU_MMU_INDEX 3
#define MEMSUFFIX MMU_MODE3_SUFFIX
@@ -252,7 +206,7 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#undef MEMSUFFIX
#endif /* (NB_MMU_MODES >= 4) */
#if (NB_MMU_MODES >= 5)
#if (NB_MMU_MODES >= 5) && defined(MMU_MODE4_SUFFIX)
#define CPU_MMU_INDEX 4
#define MEMSUFFIX MMU_MODE4_SUFFIX
@@ -271,7 +225,7 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#undef MEMSUFFIX
#endif /* (NB_MMU_MODES >= 5) */
#if (NB_MMU_MODES >= 6)
#if (NB_MMU_MODES >= 6) && defined(MMU_MODE5_SUFFIX)
#define CPU_MMU_INDEX 5
#define MEMSUFFIX MMU_MODE5_SUFFIX
@@ -311,18 +265,6 @@ uint64_t helper_ldq_cmmu(CPUArchState *env, target_ulong addr, int mmu_idx);
#undef CPU_MMU_INDEX
#undef MEMSUFFIX
#define ldub(p) ldub_data(p)
#define ldsb(p) ldsb_data(p)
#define lduw(p) lduw_data(p)
#define ldsw(p) ldsw_data(p)
#define ldl(p) ldl_data(p)
#define ldq(p) ldq_data(p)
#define stb(p, v) stb_data(p, v)
#define stw(p, v) stw_data(p, v)
#define stl(p, v) stl_data(p, v)
#define stq(p, v) stq_data(p, v)
#define CPU_MMU_INDEX (cpu_mmu_index(env))
#define MEMSUFFIX _code
#define SOFTMMU_CODE_ACCESS

View File

@@ -4,9 +4,7 @@
* Generate inline load/store functions for one MMU mode and data
* size.
*
* Generate a store function as well as signed and unsigned loads. For
* 32 and 64 bit cases, also generate floating point functions with
* the same size.
* Generate a store function as well as signed and unsigned loads.
*
* Not used directly but included from cpu_ldst.h.
*
@@ -79,7 +77,7 @@ glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
res = glue(glue(helper_ld, SUFFIX), MMUSUFFIX)(env, addr, mmu_idx);
} else {
uintptr_t hostaddr = addr + env->tlb_table[mmu_idx][page_index].addend;
res = glue(glue(ld, USUFFIX), _raw)(hostaddr);
res = glue(glue(ld, USUFFIX), _p)((uint8_t *)hostaddr);
}
return res;
}
@@ -101,7 +99,7 @@ glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
MMUSUFFIX)(env, addr, mmu_idx);
} else {
uintptr_t hostaddr = addr + env->tlb_table[mmu_idx][page_index].addend;
res = glue(glue(lds, SUFFIX), _raw)(hostaddr);
res = glue(glue(lds, SUFFIX), _p)((uint8_t *)hostaddr);
}
return res;
}
@@ -127,60 +125,10 @@ glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
glue(glue(helper_st, SUFFIX), MMUSUFFIX)(env, addr, v, mmu_idx);
} else {
uintptr_t hostaddr = addr + env->tlb_table[mmu_idx][page_index].addend;
glue(glue(st, SUFFIX), _raw)(hostaddr, v);
glue(glue(st, SUFFIX), _p)((uint8_t *)hostaddr, v);
}
}
#if DATA_SIZE == 8
static inline float64 glue(cpu_ldfq, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr)
{
union {
float64 d;
uint64_t i;
} u;
u.i = glue(cpu_ldq, MEMSUFFIX)(env, ptr);
return u.d;
}
static inline void glue(cpu_stfq, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr, float64 v)
{
union {
float64 d;
uint64_t i;
} u;
u.d = v;
glue(cpu_stq, MEMSUFFIX)(env, ptr, u.i);
}
#endif /* DATA_SIZE == 8 */
#if DATA_SIZE == 4
static inline float32 glue(cpu_ldfl, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr)
{
union {
float32 f;
uint32_t i;
} u;
u.i = glue(cpu_ldl, MEMSUFFIX)(env, ptr);
return u.f;
}
static inline void glue(cpu_stfl, MEMSUFFIX)(CPUArchState *env,
target_ulong ptr, float32 v)
{
union {
float32 f;
uint32_t i;
} u;
u.f = v;
glue(cpu_stl, MEMSUFFIX)(env, ptr, u.i);
}
#endif /* DATA_SIZE == 4 */
#endif /* !SOFTMMU_CODE_ACCESS */
#undef RES_TYPE

View File

@@ -0,0 +1,81 @@
/*
* User-only accessor function support
*
* Generate inline load/store functions for one data size.
*
* Generate a store function as well as signed and unsigned loads.
*
* Not used directly but included from cpu_ldst.h.
*
* Copyright (c) 2015 Linaro Limited
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, see <http://www.gnu.org/licenses/>.
*/
#if DATA_SIZE == 8
#define SUFFIX q
#define USUFFIX q
#define DATA_TYPE uint64_t
#elif DATA_SIZE == 4
#define SUFFIX l
#define USUFFIX l
#define DATA_TYPE uint32_t
#elif DATA_SIZE == 2
#define SUFFIX w
#define USUFFIX uw
#define DATA_TYPE uint16_t
#define DATA_STYPE int16_t
#elif DATA_SIZE == 1
#define SUFFIX b
#define USUFFIX ub
#define DATA_TYPE uint8_t
#define DATA_STYPE int8_t
#else
#error unsupported data size
#endif
#if DATA_SIZE == 8
#define RES_TYPE uint64_t
#else
#define RES_TYPE uint32_t
#endif
static inline RES_TYPE
glue(glue(cpu_ld, USUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
{
return glue(glue(ld, USUFFIX), _p)(g2h(ptr));
}
#if DATA_SIZE <= 2
static inline int
glue(glue(cpu_lds, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr)
{
return glue(glue(lds, SUFFIX), _p)(g2h(ptr));
}
#endif
#ifndef CODE_ACCESS
static inline void
glue(glue(cpu_st, SUFFIX), MEMSUFFIX)(CPUArchState *env, target_ulong ptr,
RES_TYPE v)
{
glue(glue(st, SUFFIX), _p)(g2h(ptr), v);
}
#endif
#undef RES_TYPE
#undef DATA_TYPE
#undef DATA_STYPE
#undef SUFFIX
#undef USUFFIX
#undef DATA_SIZE

View File

@@ -165,6 +165,12 @@ struct DeviceState {
int alias_required_for_version;
};
struct DeviceListener {
void (*realize)(DeviceListener *listener, DeviceState *dev);
void (*unrealize)(DeviceListener *listener, DeviceState *dev);
QTAILQ_ENTRY(DeviceListener) link;
};
#define TYPE_BUS "bus"
#define BUS(obj) OBJECT_CHECK(BusState, (obj), TYPE_BUS)
#define BUS_CLASS(klass) OBJECT_CLASS_CHECK(BusClass, (klass), TYPE_BUS)
@@ -376,4 +382,8 @@ static inline bool qbus_is_hotpluggable(BusState *bus)
{
return bus->hotplug_handler;
}
void device_listener_register(DeviceListener *listener);
void device_listener_unregister(DeviceListener *listener);
#endif

View File

@@ -16,7 +16,9 @@
#include "hw/hw.h"
#include "hw/xen/xen.h"
#include "hw/pci/pci.h"
#include "qemu/queue.h"
#include "trace.h"
/*
* We don't support Xen prior to 3.3.0.
@@ -179,4 +181,225 @@ static inline int xen_get_vmport_regs_pfn(XenXC xc, domid_t dom,
}
#endif
/* Xen before 4.5 */
#if CONFIG_XEN_CTRL_INTERFACE_VERSION < 450
#ifndef HVM_PARAM_BUFIOREQ_EVTCHN
#define HVM_PARAM_BUFIOREQ_EVTCHN 26
#endif
#define IOREQ_TYPE_PCI_CONFIG 2
typedef uint32_t ioservid_t;
static inline void xen_map_memory_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
}
static inline void xen_unmap_memory_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
}
static inline void xen_map_io_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
}
static inline void xen_unmap_io_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
}
static inline void xen_map_pcidev(XenXC xc, domid_t dom,
ioservid_t ioservid,
PCIDevice *pci_dev)
{
}
static inline void xen_unmap_pcidev(XenXC xc, domid_t dom,
ioservid_t ioservid,
PCIDevice *pci_dev)
{
}
static inline int xen_create_ioreq_server(XenXC xc, domid_t dom,
ioservid_t *ioservid)
{
return 0;
}
static inline void xen_destroy_ioreq_server(XenXC xc, domid_t dom,
ioservid_t ioservid)
{
}
static inline int xen_get_ioreq_server_info(XenXC xc, domid_t dom,
ioservid_t ioservid,
xen_pfn_t *ioreq_pfn,
xen_pfn_t *bufioreq_pfn,
evtchn_port_t *bufioreq_evtchn)
{
unsigned long param;
int rc;
rc = xc_get_hvm_param(xc, dom, HVM_PARAM_IOREQ_PFN, &param);
if (rc < 0) {
fprintf(stderr, "failed to get HVM_PARAM_IOREQ_PFN\n");
return -1;
}
*ioreq_pfn = param;
rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_PFN, &param);
if (rc < 0) {
fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_PFN\n");
return -1;
}
*bufioreq_pfn = param;
rc = xc_get_hvm_param(xc, dom, HVM_PARAM_BUFIOREQ_EVTCHN,
&param);
if (rc < 0) {
fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
return -1;
}
*bufioreq_evtchn = param;
return 0;
}
static inline int xen_set_ioreq_server_state(XenXC xc, domid_t dom,
ioservid_t ioservid,
bool enable)
{
return 0;
}
/* Xen 4.5 */
#else
static inline void xen_map_memory_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
hwaddr start_addr = section->offset_within_address_space;
ram_addr_t size = int128_get64(section->size);
hwaddr end_addr = start_addr + size - 1;
trace_xen_map_mmio_range(ioservid, start_addr, end_addr);
xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 1,
start_addr, end_addr);
}
static inline void xen_unmap_memory_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
hwaddr start_addr = section->offset_within_address_space;
ram_addr_t size = int128_get64(section->size);
hwaddr end_addr = start_addr + size - 1;
trace_xen_unmap_mmio_range(ioservid, start_addr, end_addr);
xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 1,
start_addr, end_addr);
}
static inline void xen_map_io_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
hwaddr start_addr = section->offset_within_address_space;
ram_addr_t size = int128_get64(section->size);
hwaddr end_addr = start_addr + size - 1;
trace_xen_map_portio_range(ioservid, start_addr, end_addr);
xc_hvm_map_io_range_to_ioreq_server(xc, dom, ioservid, 0,
start_addr, end_addr);
}
static inline void xen_unmap_io_section(XenXC xc, domid_t dom,
ioservid_t ioservid,
MemoryRegionSection *section)
{
hwaddr start_addr = section->offset_within_address_space;
ram_addr_t size = int128_get64(section->size);
hwaddr end_addr = start_addr + size - 1;
trace_xen_unmap_portio_range(ioservid, start_addr, end_addr);
xc_hvm_unmap_io_range_from_ioreq_server(xc, dom, ioservid, 0,
start_addr, end_addr);
}
static inline void xen_map_pcidev(XenXC xc, domid_t dom,
ioservid_t ioservid,
PCIDevice *pci_dev)
{
trace_xen_map_pcidev(ioservid, pci_bus_num(pci_dev->bus),
PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
xc_hvm_map_pcidev_to_ioreq_server(xc, dom, ioservid,
0, pci_bus_num(pci_dev->bus),
PCI_SLOT(pci_dev->devfn),
PCI_FUNC(pci_dev->devfn));
}
static inline void xen_unmap_pcidev(XenXC xc, domid_t dom,
ioservid_t ioservid,
PCIDevice *pci_dev)
{
trace_xen_unmap_pcidev(ioservid, pci_bus_num(pci_dev->bus),
PCI_SLOT(pci_dev->devfn), PCI_FUNC(pci_dev->devfn));
xc_hvm_unmap_pcidev_from_ioreq_server(xc, dom, ioservid,
0, pci_bus_num(pci_dev->bus),
PCI_SLOT(pci_dev->devfn),
PCI_FUNC(pci_dev->devfn));
}
static inline int xen_create_ioreq_server(XenXC xc, domid_t dom,
ioservid_t *ioservid)
{
int rc = xc_hvm_create_ioreq_server(xc, dom, 1, ioservid);
if (rc == 0) {
trace_xen_ioreq_server_create(*ioservid);
}
return rc;
}
static inline void xen_destroy_ioreq_server(XenXC xc, domid_t dom,
ioservid_t ioservid)
{
trace_xen_ioreq_server_destroy(ioservid);
xc_hvm_destroy_ioreq_server(xc, dom, ioservid);
}
static inline int xen_get_ioreq_server_info(XenXC xc, domid_t dom,
ioservid_t ioservid,
xen_pfn_t *ioreq_pfn,
xen_pfn_t *bufioreq_pfn,
evtchn_port_t *bufioreq_evtchn)
{
return xc_hvm_get_ioreq_server_info(xc, dom, ioservid,
ioreq_pfn, bufioreq_pfn,
bufioreq_evtchn);
}
static inline int xen_set_ioreq_server_state(XenXC xc, domid_t dom,
ioservid_t ioservid,
bool enable)
{
trace_xen_ioreq_server_state(ioservid, enable);
return xc_hvm_set_ioreq_server_state(xc, dom, ioservid, enable);
}
#endif
#endif /* QEMU_HW_XEN_COMMON_H */

View File

@@ -204,7 +204,7 @@ typedef union {
* f : float access
*
* sign is:
* (empty): for floats or 32 bit size
* (empty): for 32 or 64 bit sizes (including floats and doubles)
* u : unsigned
* s : signed
*
@@ -218,7 +218,16 @@ typedef union {
* he : host endian
* be : big endian
* le : little endian
* te : target endian
* (except for byte accesses, which have no endian infix).
*
* The target endian accessors are obviously only available to source
* files which are built per-target; they are defined in cpu-all.h.
*
* In all cases these functions take a host pointer.
* For accessors that take a guest address rather than a
* host address, see the cpu_{ld,st}_* accessors defined in
* cpu_ldst.h.
*/
static inline int ldub_p(const void *ptr)

View File

@@ -17,6 +17,7 @@ typedef struct BusState BusState;
typedef struct CharDriverState CharDriverState;
typedef struct CompatProperty CompatProperty;
typedef struct DeviceState DeviceState;
typedef struct DeviceListener DeviceListener;
typedef struct DisplayChangeListener DisplayChangeListener;
typedef struct DisplayState DisplayState;
typedef struct DisplaySurface DisplaySurface;

View File

@@ -829,8 +829,11 @@ static inline void init_thread(struct target_pt_regs *_regs, struct image_info *
_regs->gpr[1] = infop->start_stack;
#if defined(TARGET_PPC64) && !defined(TARGET_ABI32)
if (get_ppc64_abi(infop) < 2) {
_regs->gpr[2] = ldq_raw(infop->entry + 8) + infop->load_bias;
infop->entry = ldq_raw(infop->entry) + infop->load_bias;
uint64_t val;
get_user_u64(val, infop->entry + 8);
_regs->gpr[2] = val + infop->load_bias;
get_user_u64(val, infop->entry);
infop->entry = val + infop->load_bias;
} else {
_regs->gpr[12] = infop->entry; /* r12 set to global entry address */
}

View File

@@ -2972,7 +2972,7 @@ void cpu_loop(CPUM68KState *env)
{
if (ts->sim_syscalls) {
uint16_t nr;
nr = lduw(env->pc + 2);
get_user_u16(nr, env->pc + 2);
env->pc += 4;
do_m68k_simcall(env, nr);
} else {
@@ -3436,10 +3436,8 @@ CPUArchState *cpu_copy(CPUArchState *env)
CPUState *cpu = ENV_GET_CPU(env);
CPUArchState *new_env = cpu_init(cpu_model);
CPUState *new_cpu = ENV_GET_CPU(new_env);
#if defined(TARGET_HAS_ICE)
CPUBreakpoint *bp;
CPUWatchpoint *wp;
#endif
/* Reset non arch specific state */
cpu_reset(new_cpu);
@@ -3451,14 +3449,12 @@ CPUArchState *cpu_copy(CPUArchState *env)
BP_CPU break/watchpoints are handled correctly on clone. */
QTAILQ_INIT(&cpu->breakpoints);
QTAILQ_INIT(&cpu->watchpoints);
#if defined(TARGET_HAS_ICE)
QTAILQ_FOREACH(bp, &cpu->breakpoints, entry) {
cpu_breakpoint_insert(new_cpu, bp->pc, bp->flags, NULL);
}
QTAILQ_FOREACH(wp, &cpu->watchpoints, entry) {
cpu_watchpoint_insert(new_cpu, wp->vaddr, wp->len, wp->flags, NULL);
}
#endif
return new_env;
}

View File

@@ -45,29 +45,34 @@ static inline int is_revectored(int nr, struct target_revectored_struct *bitmap)
return (((uint8_t *)bitmap)[nr >> 3] >> (nr & 7)) & 1;
}
static inline void vm_putw(uint32_t segptr, unsigned int reg16, unsigned int val)
static inline void vm_putw(CPUX86State *env, uint32_t segptr,
unsigned int reg16, unsigned int val)
{
stw(segptr + (reg16 & 0xffff), val);
cpu_stw_data(env, segptr + (reg16 & 0xffff), val);
}
static inline void vm_putl(uint32_t segptr, unsigned int reg16, unsigned int val)
static inline void vm_putl(CPUX86State *env, uint32_t segptr,
unsigned int reg16, unsigned int val)
{
stl(segptr + (reg16 & 0xffff), val);
cpu_stl_data(env, segptr + (reg16 & 0xffff), val);
}
static inline unsigned int vm_getb(uint32_t segptr, unsigned int reg16)
static inline unsigned int vm_getb(CPUX86State *env,
uint32_t segptr, unsigned int reg16)
{
return ldub(segptr + (reg16 & 0xffff));
return cpu_ldub_data(env, segptr + (reg16 & 0xffff));
}
static inline unsigned int vm_getw(uint32_t segptr, unsigned int reg16)
static inline unsigned int vm_getw(CPUX86State *env,
uint32_t segptr, unsigned int reg16)
{
return lduw(segptr + (reg16 & 0xffff));
return cpu_lduw_data(env, segptr + (reg16 & 0xffff));
}
static inline unsigned int vm_getl(uint32_t segptr, unsigned int reg16)
static inline unsigned int vm_getl(CPUX86State *env,
uint32_t segptr, unsigned int reg16)
{
return ldl(segptr + (reg16 & 0xffff));
return cpu_ldl_data(env, segptr + (reg16 & 0xffff));
}
void save_v86_state(CPUX86State *env)
@@ -221,7 +226,7 @@ static void do_int(CPUX86State *env, int intno)
&ts->vm86plus.int21_revectored))
goto cannot_handle;
int_addr = (intno << 2);
segoffs = ldl(int_addr);
segoffs = cpu_ldl_data(env, int_addr);
if ((segoffs >> 16) == TARGET_BIOSSEG)
goto cannot_handle;
LOG_VM86("VM86: emulating int 0x%x. CS:IP=%04x:%04x\n",
@@ -229,9 +234,9 @@ static void do_int(CPUX86State *env, int intno)
/* save old state */
ssp = env->segs[R_SS].selector << 4;
sp = env->regs[R_ESP] & 0xffff;
vm_putw(ssp, sp - 2, get_vflags(env));
vm_putw(ssp, sp - 4, env->segs[R_CS].selector);
vm_putw(ssp, sp - 6, env->eip);
vm_putw(env, ssp, sp - 2, get_vflags(env));
vm_putw(env, ssp, sp - 4, env->segs[R_CS].selector);
vm_putw(env, ssp, sp - 6, env->eip);
ADD16(env->regs[R_ESP], -6);
/* goto interrupt handler */
env->eip = segoffs & 0xffff;
@@ -285,7 +290,7 @@ void handle_vm86_fault(CPUX86State *env)
data32 = 0;
pref_done = 0;
do {
opcode = vm_getb(csp, ip);
opcode = vm_getb(env, csp, ip);
ADD16(ip, 1);
switch (opcode) {
case 0x66: /* 32-bit data */ data32=1; break;
@@ -306,10 +311,10 @@ void handle_vm86_fault(CPUX86State *env)
switch(opcode) {
case 0x9c: /* pushf */
if (data32) {
vm_putl(ssp, sp - 4, get_vflags(env));
vm_putl(env, ssp, sp - 4, get_vflags(env));
ADD16(env->regs[R_ESP], -4);
} else {
vm_putw(ssp, sp - 2, get_vflags(env));
vm_putw(env, ssp, sp - 2, get_vflags(env));
ADD16(env->regs[R_ESP], -2);
}
env->eip = ip;
@@ -317,10 +322,10 @@ void handle_vm86_fault(CPUX86State *env)
case 0x9d: /* popf */
if (data32) {
newflags = vm_getl(ssp, sp);
newflags = vm_getl(env, ssp, sp);
ADD16(env->regs[R_ESP], 4);
} else {
newflags = vm_getw(ssp, sp);
newflags = vm_getw(env, ssp, sp);
ADD16(env->regs[R_ESP], 2);
}
env->eip = ip;
@@ -335,7 +340,7 @@ void handle_vm86_fault(CPUX86State *env)
VM86_FAULT_RETURN;
case 0xcd: /* int */
intno = vm_getb(csp, ip);
intno = vm_getb(env, csp, ip);
ADD16(ip, 1);
env->eip = ip;
if (ts->vm86plus.vm86plus.flags & TARGET_vm86dbg_active) {
@@ -350,14 +355,14 @@ void handle_vm86_fault(CPUX86State *env)
case 0xcf: /* iret */
if (data32) {
newip = vm_getl(ssp, sp) & 0xffff;
newcs = vm_getl(ssp, sp + 4) & 0xffff;
newflags = vm_getl(ssp, sp + 8);
newip = vm_getl(env, ssp, sp) & 0xffff;
newcs = vm_getl(env, ssp, sp + 4) & 0xffff;
newflags = vm_getl(env, ssp, sp + 8);
ADD16(env->regs[R_ESP], 12);
} else {
newip = vm_getw(ssp, sp);
newcs = vm_getw(ssp, sp + 2);
newflags = vm_getw(ssp, sp + 4);
newip = vm_getw(env, ssp, sp);
newcs = vm_getw(env, ssp, sp + 2);
newflags = vm_getw(env, ssp, sp + 4);
ADD16(env->regs[R_ESP], 6);
}
env->eip = newip;

View File

@@ -1292,16 +1292,16 @@ static void memory_dump(Monitor *mon, int count, int format, int wsize,
switch(wsize) {
default:
case 1:
v = ldub_raw(buf + i);
v = ldub_p(buf + i);
break;
case 2:
v = lduw_raw(buf + i);
v = lduw_p(buf + i);
break;
case 4:
v = (uint32_t)ldl_raw(buf + i);
v = (uint32_t)ldl_p(buf + i);
break;
case 8:
v = ldq_raw(buf + i);
v = ldq_p(buf + i);
break;
}
monitor_printf(mon, " ");

View File

@@ -3258,6 +3258,18 @@
# Send input event(s) to guest.
#
# @console: #optional console to send event(s) to.
# This parameter can be used to send the input event to
# specific input devices in case (a) multiple input devices
# of the same kind are added to the virtual machine and (b)
# you have configured input routing (see docs/multiseat.txt)
# for those input devices. If input routing is not
# configured this parameter has no effect.
# If @console is missing, only devices that aren't associated
# with a console are admissible.
# If @console is specified, it must exist, and both devices
# associated with that console and devices not associated with a
# console are admissible, but the former take precedence.
#
# @events: List of InputEvent union.
#

View File

@@ -99,6 +99,14 @@ struct %(name)s
ret += generate_struct_fields(members)
# Make sure that all structs have at least one field; this avoids
# potential issues with attempting to malloc space for zero-length structs
# in C, and also incompatibility with C++ (where an empty struct is size 1).
if not base and not members:
ret += mcgen('''
char qapi_dummy_field_for_empty_struct;
''')
if len(fieldname):
fieldname = " " + fieldname
ret += mcgen('''

View File

@@ -32,8 +32,6 @@
#include "fpu/softfloat.h"
#define TARGET_HAS_ICE 1
#define ELF_MACHINE EM_ALPHA
#define ICACHE_LINE_SIZE 32

View File

@@ -39,8 +39,6 @@
#include "fpu/softfloat.h"
#define TARGET_HAS_ICE 1
#define EXCP_UDEF 1 /* undefined instruction */
#define EXCP_SWI 2 /* software interrupt */
#define EXCP_PREFETCH_ABORT 3

View File

@@ -29,8 +29,6 @@
#include "exec/cpu-defs.h"
#define TARGET_HAS_ICE 1
#define ELF_MACHINE EM_CRIS
#define EXCP_NMI 1

View File

@@ -37,8 +37,6 @@
close to the modifying instruction */
#define TARGET_HAS_PRECISE_SMC
#define TARGET_HAS_ICE 1
#ifdef TARGET_X86_64
#define ELF_MACHINE EM_X86_64
#define ELF_MACHINE_UNAME "x86_64"

View File

@@ -34,7 +34,21 @@
# define LOG_PCALL_STATE(cpu) do { } while (0)
#endif
#ifndef CONFIG_USER_ONLY
#ifdef CONFIG_USER_ONLY
#define MEMSUFFIX _kernel
#define DATA_SIZE 1
#include "exec/cpu_ldst_useronly_template.h"
#define DATA_SIZE 2
#include "exec/cpu_ldst_useronly_template.h"
#define DATA_SIZE 4
#include "exec/cpu_ldst_useronly_template.h"
#define DATA_SIZE 8
#include "exec/cpu_ldst_useronly_template.h"
#undef MEMSUFFIX
#else
#define CPU_MMU_INDEX (cpu_mmu_index_kernel(env))
#define MEMSUFFIX _kernel
#define DATA_SIZE 1

View File

@@ -30,8 +30,6 @@
struct CPULM32State;
typedef struct CPULM32State CPULM32State;
#define TARGET_HAS_ICE 1
#define ELF_MACHINE EM_LATTICEMICO32
#define NB_MMU_MODES 1

View File

@@ -32,8 +32,6 @@
#define MAX_QREGS 32
#define TARGET_HAS_ICE 1
#define ELF_MACHINE EM_68K
#define EXCP_ACCESS 2 /* Access (MMU) error. */

View File

@@ -34,8 +34,6 @@ typedef struct CPUMBState CPUMBState;
#include "mmu.h"
#endif
#define TARGET_HAS_ICE 1
#define ELF_MACHINE EM_MICROBLAZE
#define EXCP_NMI 1

View File

@@ -4,7 +4,6 @@
//#define DEBUG_OP
#define ALIGNED_ONLY
#define TARGET_HAS_ICE 1
#define ELF_MACHINE EM_MIPS

View File

@@ -74,7 +74,7 @@ void helper_raise_exception(CPUMIPSState *env, uint32_t exception)
static inline type do_##name(CPUMIPSState *env, target_ulong addr, \
int mem_idx) \
{ \
return (type) insn##_raw(addr); \
return (type) cpu_##insn##_data(env, addr); \
}
#else
#define HELPER_LD(name, insn, type) \
@@ -101,7 +101,7 @@ HELPER_LD(ld, ldq, int64_t)
static inline void do_##name(CPUMIPSState *env, target_ulong addr, \
type val, int mem_idx) \
{ \
insn##_raw(addr, val); \
cpu_##insn##_data(env, addr, val); \
}
#else
#define HELPER_ST(name, insn, type) \

View File

@@ -26,8 +26,6 @@
#define CPUArchState struct CPUMoxieState
#define TARGET_HAS_ICE 1
#define ELF_MACHINE 0xFEED /* EM_MOXIE */
#define MOXIE_EX_DIV0 0

View File

@@ -79,8 +79,6 @@
#include "fpu/softfloat.h"
#define TARGET_HAS_ICE 1
#if defined (TARGET_PPC64)
#define ELF_MACHINE EM_PPC64
#else

View File

@@ -886,8 +886,6 @@ int sclp_service_call(CPUS390XState *env, uint64_t sccb, uint32_t code);
uint32_t calc_cc(CPUS390XState *env, uint32_t cc_op, uint64_t src, uint64_t dst,
uint64_t vr);
#define TARGET_HAS_ICE 1
/* The value of the TOD clock for 1.1.1970. */
#define TOD_UNIX_EPOCH 0x7d91048bca000000ULL

View File

@@ -23,7 +23,6 @@
#include "qemu-common.h"
#define TARGET_LONG_BITS 32
#define TARGET_HAS_ICE 1
#define ELF_MACHINE EM_SH

View File

@@ -31,8 +31,6 @@
#include "fpu/softfloat.h"
#define TARGET_HAS_ICE 1
#if !defined(TARGET_SPARC64)
#define ELF_MACHINE EM_SPARC
#else

View File

@@ -1122,17 +1122,17 @@ uint64_t helper_ld_asi(CPUSPARCState *env, target_ulong addr, int asi, int size,
{
switch (size) {
case 1:
ret = ldub_raw(addr);
ret = cpu_ldub_data(env, addr);
break;
case 2:
ret = lduw_raw(addr);
ret = cpu_lduw_data(env, addr);
break;
case 4:
ret = ldl_raw(addr);
ret = cpu_ldl_data(env, addr);
break;
default:
case 8:
ret = ldq_raw(addr);
ret = cpu_ldq_data(env, addr);
break;
}
}
@@ -1239,17 +1239,17 @@ void helper_st_asi(CPUSPARCState *env, target_ulong addr, target_ulong val,
{
switch (size) {
case 1:
stb_raw(addr, val);
cpu_stb_data(env, addr, val);
break;
case 2:
stw_raw(addr, val);
cpu_stw_data(env, addr, val);
break;
case 4:
stl_raw(addr, val);
cpu_stl_data(env, addr, val);
break;
case 8:
default:
stq_raw(addr, val);
cpu_stq_data(env, addr, val);
break;
}
}
@@ -2289,8 +2289,8 @@ void helper_ldqf(CPUSPARCState *env, target_ulong addr, int mem_idx)
break;
}
#else
u.ll.upper = ldq_raw(address_mask(env, addr));
u.ll.lower = ldq_raw(address_mask(env, addr + 8));
u.ll.upper = cpu_ldq_data(env, address_mask(env, addr));
u.ll.lower = cpu_ldq_data(env, address_mask(env, addr + 8));
QT0 = u.q;
#endif
}
@@ -2326,8 +2326,8 @@ void helper_stqf(CPUSPARCState *env, target_ulong addr, int mem_idx)
}
#else
u.q = QT0;
stq_raw(address_mask(env, addr), u.ll.upper);
stq_raw(address_mask(env, addr + 8), u.ll.lower);
cpu_stq_data(env, address_mask(env, addr), u.ll.upper);
cpu_stq_data(env, address_mask(env, addr + 8), u.ll.lower);
#endif
}


@@ -39,8 +39,6 @@
#include "exec/cpu-defs.h"
#include "fpu/softfloat.h"
#define TARGET_HAS_ICE 1
#define NB_MMU_MODES 4
#define TARGET_PHYS_ADDR_SPACE_BITS 32


@@ -897,6 +897,15 @@ pvscsi_tx_rings_num_pages(const char* label, uint32_t num) "Number of %s pages:
# xen-hvm.c
xen_ram_alloc(unsigned long ram_addr, unsigned long size) "requested: %#lx, size %#lx"
xen_client_set_memory(uint64_t start_addr, unsigned long size, bool log_dirty) "%#"PRIx64" size %#lx, log_dirty %i"
xen_ioreq_server_create(uint32_t id) "id: %u"
xen_ioreq_server_destroy(uint32_t id) "id: %u"
xen_ioreq_server_state(uint32_t id, bool enable) "id: %u: enable: %i"
xen_map_mmio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
xen_unmap_mmio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
xen_map_portio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
xen_unmap_portio_range(uint32_t id, uint64_t start_addr, uint64_t end_addr) "id: %u start: %#"PRIx64" end: %#"PRIx64
xen_map_pcidev(uint32_t id, uint8_t bus, uint8_t dev, uint8_t func) "id: %u bdf: %02x.%02x.%02x"
xen_unmap_pcidev(uint32_t id, uint8_t bus, uint8_t dev, uint8_t func) "id: %u bdf: %02x.%02x.%02x"
# xen-mapcache.c
xen_map_cache(uint64_t phys_addr) "want %#"PRIx64


@@ -1451,7 +1451,7 @@ static TranslationBlock *tb_find_pc(uintptr_t tc_ptr)
return &tcg_ctx.tb_ctx.tbs[m_max];
}
#if defined(TARGET_HAS_ICE) && !defined(CONFIG_USER_ONLY)
#if !defined(CONFIG_USER_ONLY)
void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
{
ram_addr_t ram_addr;
@@ -1467,7 +1467,7 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
+ addr;
tb_invalidate_phys_page_range(ram_addr, ram_addr + 1, 0);
}
#endif /* TARGET_HAS_ICE && !defined(CONFIG_USER_ONLY) */
#endif /* !defined(CONFIG_USER_ONLY) */
void tb_check_watchpoint(CPUState *cpu)
{

xen-hvm.c

@@ -85,9 +85,6 @@ static inline ioreq_t *xen_vcpu_ioreq(shared_iopage_t *shared_page, int vcpu)
}
# define FMT_ioreq_size "u"
#endif
#ifndef HVM_PARAM_BUFIOREQ_EVTCHN
#define HVM_PARAM_BUFIOREQ_EVTCHN 26
#endif
#define BUFFER_IO_MAX_DELAY 100
/* Leave some slack so that hvmloader does not complain about lack of
@@ -107,6 +104,7 @@ typedef struct XenPhysmap {
} XenPhysmap;
typedef struct XenIOState {
ioservid_t ioservid;
shared_iopage_t *shared_page;
shared_vmport_iopage_t *shared_vmport_page;
buffered_iopage_t *buffered_io_page;
@@ -123,6 +121,8 @@ typedef struct XenIOState {
struct xs_handle *xenstore;
MemoryListener memory_listener;
MemoryListener io_listener;
DeviceListener device_listener;
QLIST_HEAD(, XenPhysmap) physmap;
hwaddr free_phys_offset;
const XenPhysmap *log_for_dirtybit;
@@ -491,12 +491,23 @@ static void xen_set_memory(struct MemoryListener *listener,
bool log_dirty = memory_region_is_logging(section->mr);
hvmmem_type_t mem_type;
if (section->mr == &ram_memory) {
return;
} else {
if (add) {
xen_map_memory_section(xen_xc, xen_domid, state->ioservid,
section);
} else {
xen_unmap_memory_section(xen_xc, xen_domid, state->ioservid,
section);
}
}
if (!memory_region_is_ram(section->mr)) {
return;
}
if (!(section->mr != &ram_memory
&& ( (log_dirty && add) || (!log_dirty && !add)))) {
if (log_dirty != add) {
return;
}
@@ -539,6 +550,50 @@ static void xen_region_del(MemoryListener *listener,
memory_region_unref(section->mr);
}
static void xen_io_add(MemoryListener *listener,
MemoryRegionSection *section)
{
XenIOState *state = container_of(listener, XenIOState, io_listener);
memory_region_ref(section->mr);
xen_map_io_section(xen_xc, xen_domid, state->ioservid, section);
}
static void xen_io_del(MemoryListener *listener,
MemoryRegionSection *section)
{
XenIOState *state = container_of(listener, XenIOState, io_listener);
xen_unmap_io_section(xen_xc, xen_domid, state->ioservid, section);
memory_region_unref(section->mr);
}
static void xen_device_realize(DeviceListener *listener,
DeviceState *dev)
{
XenIOState *state = container_of(listener, XenIOState, device_listener);
if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
PCIDevice *pci_dev = PCI_DEVICE(dev);
xen_map_pcidev(xen_xc, xen_domid, state->ioservid, pci_dev);
}
}
static void xen_device_unrealize(DeviceListener *listener,
DeviceState *dev)
{
XenIOState *state = container_of(listener, XenIOState, device_listener);
if (object_dynamic_cast(OBJECT(dev), TYPE_PCI_DEVICE)) {
PCIDevice *pci_dev = PCI_DEVICE(dev);
xen_unmap_pcidev(xen_xc, xen_domid, state->ioservid, pci_dev);
}
}
static void xen_sync_dirty_bitmap(XenIOState *state,
hwaddr start_addr,
ram_addr_t size)
@@ -639,6 +694,17 @@ static MemoryListener xen_memory_listener = {
.priority = 10,
};
static MemoryListener xen_io_listener = {
.region_add = xen_io_add,
.region_del = xen_io_del,
.priority = 10,
};
static DeviceListener xen_device_listener = {
.realize = xen_device_realize,
.unrealize = xen_device_unrealize,
};
/* get the ioreq packets from share mem */
static ioreq_t *cpu_get_ioreq_from_shared_memory(XenIOState *state, int vcpu)
{
@@ -887,6 +953,27 @@ static void handle_ioreq(XenIOState *state, ioreq_t *req)
case IOREQ_TYPE_INVALIDATE:
xen_invalidate_map_cache();
break;
case IOREQ_TYPE_PCI_CONFIG: {
uint32_t sbdf = req->addr >> 32;
uint32_t val;
/* Fake a write to port 0xCF8 so that
* the config space access will target the
* correct device model.
*/
val = (1u << 31) |
((req->addr & 0x0f00) << 16) |
((sbdf & 0xffff) << 8) |
(req->addr & 0xfc);
do_outp(0xcf8, 4, val);
/* Now issue the config space access via
* port 0xCFC
*/
req->addr = 0xcfc | (req->addr & 0x03);
cpu_ioreq_pio(req);
break;
}
default:
hw_error("Invalid ioreq type 0x%x\n", req->type);
}
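
The new IOREQ_TYPE_PCI_CONFIG case above forwards a Xen PCI config-space request through the legacy 0xCF8/0xCFC port pair: it rebuilds a CONFIG_ADDRESS value from the ioreq address, writes it to 0xCF8, then replays the data access on 0xCFC. A small self-contained sketch of just that address encoding (the constants mirror the hunk; the helper name and the example bus/device/function are made up for illustration):

#include <stdint.h>
#include <stdio.h>

/* Rebuild the CONFIG_ADDRESS (port 0xCF8) value from an ioreq address whose
 * upper 32 bits hold the segment/bus/device/function and whose low bits hold
 * the config-space offset, as in the hunk above. */
static uint32_t cf8_from_ioreq_addr(uint64_t addr)
{
    uint32_t sbdf = (uint32_t)(addr >> 32);

    return (1u << 31)                        /* enable bit */
         | ((uint32_t)(addr & 0x0f00) << 16) /* extended register bits */
         | ((sbdf & 0xffff) << 8)            /* bus/device/function */
         | (uint32_t)(addr & 0xfc);          /* dword-aligned offset */
}

int main(void)
{
    /* Hypothetical example: bus 0, device 2, function 0, offset 0x44. */
    uint64_t addr = ((uint64_t)((0u << 8) | (2u << 3) | 0u) << 32) | 0x44;
    printf("CONFIG_ADDRESS = 0x%08x\n", cf8_from_ioreq_addr(addr));
    /* The data access itself is then replayed on port 0xCFC | (addr & 3). */
    return 0;
}
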
@@ -1017,9 +1104,15 @@ static void xen_main_loop_prepare(XenIOState *state)
static void xen_hvm_change_state_handler(void *opaque, int running,
RunState rstate)
{
XenIOState *state = opaque;
if (running) {
xen_main_loop_prepare((XenIOState *)opaque);
xen_main_loop_prepare(state);
}
xen_set_ioreq_server_state(xen_xc, xen_domid,
state->ioservid,
(rstate == RUN_STATE_RUNNING));
}
static void xen_exit_notifier(Notifier *n, void *data)
@@ -1088,8 +1181,9 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
MemoryRegion **ram_memory)
{
int i, rc;
unsigned long ioreq_pfn;
unsigned long bufioreq_evtchn;
xen_pfn_t ioreq_pfn;
xen_pfn_t bufioreq_pfn;
evtchn_port_t bufioreq_evtchn;
XenIOState *state;
state = g_malloc0(sizeof (XenIOState));
@@ -1106,6 +1200,12 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
return -1;
}
rc = xen_create_ioreq_server(xen_xc, xen_domid, &state->ioservid);
if (rc < 0) {
perror("xen: ioreq server create");
return -1;
}
state->exit.notify = xen_exit_notifier;
qemu_add_exit_notifier(&state->exit);
@@ -1115,8 +1215,18 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
state->wakeup.notify = xen_wakeup_notifier;
qemu_register_wakeup_notifier(&state->wakeup);
xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);
rc = xen_get_ioreq_server_info(xen_xc, xen_domid, state->ioservid,
&ioreq_pfn, &bufioreq_pfn,
&bufioreq_evtchn);
if (rc < 0) {
hw_error("failed to get ioreq server info: error %d handle=" XC_INTERFACE_FMT,
errno, xen_xc);
}
DPRINTF("shared page at pfn %lx\n", ioreq_pfn);
DPRINTF("buffered io page at pfn %lx\n", bufioreq_pfn);
DPRINTF("buffered io evtchn is %x\n", bufioreq_evtchn);
state->shared_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
PROT_READ|PROT_WRITE, ioreq_pfn);
if (state->shared_page == NULL) {
@@ -1138,10 +1248,10 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
hw_error("get vmport regs pfn returned error %d, rc=%d", errno, rc);
}
xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_PFN, &ioreq_pfn);
DPRINTF("buffered io page at pfn %lx\n", ioreq_pfn);
state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid, XC_PAGE_SIZE,
PROT_READ|PROT_WRITE, ioreq_pfn);
state->buffered_io_page = xc_map_foreign_range(xen_xc, xen_domid,
XC_PAGE_SIZE,
PROT_READ|PROT_WRITE,
bufioreq_pfn);
if (state->buffered_io_page == NULL) {
hw_error("map buffered IO page returned error %d", errno);
}
@@ -1149,6 +1259,12 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
/* Note: cpus is empty at this point in init */
state->cpu_by_vcpu_id = g_malloc0(max_cpus * sizeof(CPUState *));
rc = xen_set_ioreq_server_state(xen_xc, xen_domid, state->ioservid, true);
if (rc < 0) {
hw_error("failed to enable ioreq server info: error %d handle=" XC_INTERFACE_FMT,
errno, xen_xc);
}
state->ioreq_local_port = g_malloc0(max_cpus * sizeof (evtchn_port_t));
/* FIXME: how about if we overflow the page here? */
@@ -1156,22 +1272,16 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
xen_vcpu_eport(state->shared_page, i));
if (rc == -1) {
fprintf(stderr, "bind interdomain ioctl error %d\n", errno);
fprintf(stderr, "shared evtchn %d bind error %d\n", i, errno);
return -1;
}
state->ioreq_local_port[i] = rc;
}
rc = xc_get_hvm_param(xen_xc, xen_domid, HVM_PARAM_BUFIOREQ_EVTCHN,
&bufioreq_evtchn);
if (rc < 0) {
fprintf(stderr, "failed to get HVM_PARAM_BUFIOREQ_EVTCHN\n");
return -1;
}
rc = xc_evtchn_bind_interdomain(state->xce_handle, xen_domid,
(uint32_t)bufioreq_evtchn);
bufioreq_evtchn);
if (rc == -1) {
fprintf(stderr, "bind interdomain ioctl error %d\n", errno);
fprintf(stderr, "buffered evtchn bind error %d\n", errno);
return -1;
}
state->bufioreq_local_port = rc;
@@ -1187,6 +1297,12 @@ int xen_hvm_init(ram_addr_t *below_4g_mem_size, ram_addr_t *above_4g_mem_size,
memory_listener_register(&state->memory_listener, &address_space_memory);
state->log_for_dirtybit = NULL;
state->io_listener = xen_io_listener;
memory_listener_register(&state->io_listener, &address_space_io);
state->device_listener = xen_device_listener;
device_listener_register(&state->device_listener);
/* Initialize backend core & drivers */
if (xen_be_init() != 0) {
fprintf(stderr, "%s: xen backend core setup failed\n", __FUNCTION__);


@@ -49,9 +49,6 @@
*/
#define NON_MCACHE_MEMORY_SIZE (80 * 1024 * 1024)
#define mapcache_lock() ((void)0)
#define mapcache_unlock() ((void)0)
typedef struct MapCacheEntry {
hwaddr paddr_index;
uint8_t *vaddr_base;
@@ -79,11 +76,22 @@ typedef struct MapCache {
unsigned int mcache_bucket_shift;
phys_offset_to_gaddr_t phys_offset_to_gaddr;
QemuMutex lock;
void *opaque;
} MapCache;
static MapCache *mapcache;
static inline void mapcache_lock(void)
{
qemu_mutex_lock(&mapcache->lock);
}
static inline void mapcache_unlock(void)
{
qemu_mutex_unlock(&mapcache->lock);
}
static inline int test_bits(int nr, int size, const unsigned long *addr)
{
unsigned long res = find_next_zero_bit(addr, size + nr, nr);
@@ -102,6 +110,7 @@ void xen_map_cache_init(phys_offset_to_gaddr_t f, void *opaque)
mapcache->phys_offset_to_gaddr = f;
mapcache->opaque = opaque;
qemu_mutex_init(&mapcache->lock);
QTAILQ_INIT(&mapcache->locked_entries);
@@ -193,14 +202,14 @@ static void xen_remap_bucket(MapCacheEntry *entry,
g_free(err);
}
uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
uint8_t lock)
static uint8_t *xen_map_cache_unlocked(hwaddr phys_addr, hwaddr size,
uint8_t lock)
{
MapCacheEntry *entry, *pentry = NULL;
hwaddr address_index;
hwaddr address_offset;
hwaddr __size = size;
hwaddr __test_bit_size = size;
hwaddr cache_size = size;
hwaddr test_bit_size;
bool translated = false;
tryagain:
@@ -209,22 +218,22 @@ tryagain:
trace_xen_map_cache(phys_addr);
/* __test_bit_size is always a multiple of XC_PAGE_SIZE */
/* test_bit_size is always a multiple of XC_PAGE_SIZE */
if (size) {
__test_bit_size = size + (phys_addr & (XC_PAGE_SIZE - 1));
test_bit_size = size + (phys_addr & (XC_PAGE_SIZE - 1));
if (__test_bit_size % XC_PAGE_SIZE) {
__test_bit_size += XC_PAGE_SIZE - (__test_bit_size % XC_PAGE_SIZE);
if (test_bit_size % XC_PAGE_SIZE) {
test_bit_size += XC_PAGE_SIZE - (test_bit_size % XC_PAGE_SIZE);
}
} else {
__test_bit_size = XC_PAGE_SIZE;
test_bit_size = XC_PAGE_SIZE;
}
if (mapcache->last_entry != NULL &&
mapcache->last_entry->paddr_index == address_index &&
!lock && !__size &&
!lock && !size &&
test_bits(address_offset >> XC_PAGE_SHIFT,
__test_bit_size >> XC_PAGE_SHIFT,
test_bit_size >> XC_PAGE_SHIFT,
mapcache->last_entry->valid_mapping)) {
trace_xen_map_cache_return(mapcache->last_entry->vaddr_base + address_offset);
return mapcache->last_entry->vaddr_base + address_offset;
@@ -232,20 +241,20 @@ tryagain:
/* size is always a multiple of MCACHE_BUCKET_SIZE */
if (size) {
__size = size + address_offset;
if (__size % MCACHE_BUCKET_SIZE) {
__size += MCACHE_BUCKET_SIZE - (__size % MCACHE_BUCKET_SIZE);
cache_size = size + address_offset;
if (cache_size % MCACHE_BUCKET_SIZE) {
cache_size += MCACHE_BUCKET_SIZE - (cache_size % MCACHE_BUCKET_SIZE);
}
} else {
__size = MCACHE_BUCKET_SIZE;
cache_size = MCACHE_BUCKET_SIZE;
}
entry = &mapcache->entry[address_index % mapcache->nr_buckets];
while (entry && entry->lock && entry->vaddr_base &&
(entry->paddr_index != address_index || entry->size != __size ||
(entry->paddr_index != address_index || entry->size != cache_size ||
!test_bits(address_offset >> XC_PAGE_SHIFT,
__test_bit_size >> XC_PAGE_SHIFT,
test_bit_size >> XC_PAGE_SHIFT,
entry->valid_mapping))) {
pentry = entry;
entry = entry->next;
@@ -253,19 +262,19 @@ tryagain:
if (!entry) {
entry = g_malloc0(sizeof (MapCacheEntry));
pentry->next = entry;
xen_remap_bucket(entry, __size, address_index);
xen_remap_bucket(entry, cache_size, address_index);
} else if (!entry->lock) {
if (!entry->vaddr_base || entry->paddr_index != address_index ||
entry->size != __size ||
entry->size != cache_size ||
!test_bits(address_offset >> XC_PAGE_SHIFT,
__test_bit_size >> XC_PAGE_SHIFT,
test_bit_size >> XC_PAGE_SHIFT,
entry->valid_mapping)) {
xen_remap_bucket(entry, __size, address_index);
xen_remap_bucket(entry, cache_size, address_index);
}
}
if(!test_bits(address_offset >> XC_PAGE_SHIFT,
__test_bit_size >> XC_PAGE_SHIFT,
test_bit_size >> XC_PAGE_SHIFT,
entry->valid_mapping)) {
mapcache->last_entry = NULL;
if (!translated && mapcache->phys_offset_to_gaddr) {
@@ -291,14 +300,27 @@ tryagain:
return mapcache->last_entry->vaddr_base + address_offset;
}
uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
uint8_t lock)
{
uint8_t *p;
mapcache_lock();
p = xen_map_cache_unlocked(phys_addr, size, lock);
mapcache_unlock();
return p;
}
ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
{
MapCacheEntry *entry = NULL;
MapCacheRev *reventry;
hwaddr paddr_index;
hwaddr size;
ram_addr_t raddr;
int found = 0;
mapcache_lock();
QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
if (reventry->vaddr_req == ptr) {
paddr_index = reventry->paddr_index;
@@ -323,13 +345,16 @@ ram_addr_t xen_ram_addr_from_mapcache(void *ptr)
}
if (!entry) {
DPRINTF("Trying to find address %p that is not in the mapcache!\n", ptr);
return 0;
raddr = 0;
} else {
raddr = (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
((unsigned long) ptr - (unsigned long) entry->vaddr_base);
}
return (reventry->paddr_index << MCACHE_BUCKET_SHIFT) +
((unsigned long) ptr - (unsigned long) entry->vaddr_base);
mapcache_unlock();
return raddr;
}
void xen_invalidate_map_cache_entry(uint8_t *buffer)
static void xen_invalidate_map_cache_entry_unlocked(uint8_t *buffer)
{
MapCacheEntry *entry = NULL, *pentry = NULL;
MapCacheRev *reventry;
@@ -383,6 +408,13 @@ void xen_invalidate_map_cache_entry(uint8_t *buffer)
g_free(entry);
}
void xen_invalidate_map_cache_entry(uint8_t *buffer)
{
mapcache_lock();
xen_invalidate_map_cache_entry_unlocked(buffer);
mapcache_unlock();
}
void xen_invalidate_map_cache(void)
{
unsigned long i;
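
The mapcache changes above all follow one pattern: the former no-op mapcache_lock()/mapcache_unlock() macros become real mutex operations, the existing body of each public function moves into an *_unlocked() helper, and the public entry point simply brackets that helper with the lock (as xen_map_cache() and xen_invalidate_map_cache_entry() now do). A minimal standalone sketch of the pattern, with a pthread mutex standing in for QemuMutex (names and cached state are illustrative only):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;
    uint64_t last_bucket;          /* hypothetical cached state */
} DemoCache;

/* All the real work happens here, with the lock already held. */
static uint64_t cache_lookup_unlocked(DemoCache *c, uint64_t addr)
{
    c->last_bucket = addr >> 20;
    return c->last_bucket;
}

/* Public entry point: take the lock, run the unlocked worker, drop the lock. */
static uint64_t cache_lookup(DemoCache *c, uint64_t addr)
{
    pthread_mutex_lock(&c->lock);
    uint64_t bucket = cache_lookup_unlocked(c, addr);
    pthread_mutex_unlock(&c->lock);
    return bucket;
}

int main(void)
{
    DemoCache c = { .lock = PTHREAD_MUTEX_INITIALIZER, .last_bucket = 0 };
    printf("bucket %llu\n", (unsigned long long)cache_lookup(&c, 0x12345678));
    return 0;
}
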
@@ -391,14 +423,14 @@ void xen_invalidate_map_cache(void)
/* Flush pending AIO before destroying the mapcache */
bdrv_drain_all();
mapcache_lock();
QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
DPRINTF("There should be no locked mappings at this time, "
"but "TARGET_FMT_plx" -> %p is present\n",
reventry->paddr_index, reventry->vaddr_req);
}
mapcache_lock();
for (i = 0; i < mapcache->nr_buckets; i++) {
MapCacheEntry *entry = &mapcache->entry[i];