Compare commits

...

49 Commits

Author SHA1 Message Date
Peter Maydell
851627352c Update version for v2.0.0-rc3 release
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 17:45:11 +01:00
Michael Tokarev
50212d6346 Revert "fix return check for KVM_GET_DIRTY_LOG ioctl"
This reverts commit b533f658a9.

The original code was wrong: it effectively ignored errors from the
kernel, because the kernel does not return -1 on error but returns
-errno, and does not return -EPERM for this particular ioctl.
But in some cases the kernel actually does return an unsuccessful
result, namely -ENOENT when the dirty bitmap for the requested slot
does not exist.  With the new code this condition becomes an error
when it shouldn't be.

Revert that patch instead of fixing it properly this late in the
release process.  I disagree with this approach, but let's make
things move _somewhere_, instead of arguing endlessly which of
the two proposed fixes is better.
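
As a rough illustration (not QEMU code; the helper and values below are
made up), the practical difference between the two checks on a -errno
style return value:

    #include <errno.h>

    /* Sketch only: kvm_vm_ioctl() reports failure as -errno rather than -1,
     * so "== -1" matches nothing but -EPERM (EPERM == 1), while "< 0" also
     * trips on the benign -ENOENT returned when no dirty bitmap exists yet
     * for the slot. */
    static int fake_get_dirty_log(int have_bitmap)
    {
        return have_bitmap ? 0 : -ENOENT;   /* kernel-style -errno result */
    }

    int main(void)
    {
        int ret = fake_get_dirty_log(0);

        if (ret == -1) {
            /* not reached for -ENOENT: most errors are effectively ignored */
        }
        if (ret < 0) {
            /* reached, even though -ENOENT is harmless for this ioctl */
        }
        return 0;
    }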

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Message-id: 1397477644-902-1-git-send-email-mjt@msgid.tls.msk.ru
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 15:40:02 +01:00
Peter Maydell
c2b9af1d6c Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
acpi: SSDT update

This has a fix by Igor for a regression introduced by
the bridge hotplug code.
Expected test files were updated accordingly.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Mon 14 Apr 2014 13:13:35 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>"

* remotes/mst/tags/for_upstream:
  acpi-test: update expected files
  acpi: fix incorrect encoding for 0x{F-1}FFFF

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 14:02:12 +01:00
Benoît Canet
940973ae0b ide: Correct improper smart self test counter reset in ide core.
The SMART self-test counter was incorrectly being reset to zero
rather than to 1. This had the effect that on every 21st SMART EXECUTE OFFLINE:
 * We would write off the beginning of a dynamically allocated buffer
 * We forgot the SMART history
Fix this.
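
For context, a minimal sketch of the index arithmetic involved (the
24-byte entry size and the offset formula are taken from the IDE hunk
further down in this page; the rest is illustrative):

    #include <stdio.h>

    /* Sketch only: the self-test log counter is 1-based and each entry is
     * 24 bytes, so the entry offset is n = 2 + (count - 1) * 24.  Wrapping
     * the counter to 0 instead of 1 makes n negative, i.e. a write before
     * the start of the allocated log buffer, and loses the history. */
    int main(void)
    {
        int count = 0;                             /* buggy wrap value */
        printf("n = %d\n", 2 + (count - 1) * 24);  /* -22: out of bounds */

        count = 1;                                 /* fixed wrap value */
        printf("n = %d\n", 2 + (count - 1) * 24);  /* 2: oldest entry reused */
        return 0;
    }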

Signed-off-by: Benoit Canet <benoit@irqsave.net>
Message-id: 1397336390-24664-1-git-send-email-benoit.canet@irqsave.net
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Cc: qemu-stable@nongnu.org
Acked-by: Kevin Wolf <kwolf@redhat.com>
[PMM: tweaked commit message as per suggestions from Markus]
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 13:23:53 +01:00
Michael S. Tsirkin
8611224a7b acpi-test: update expected files
commit 58b035c7354afc0c5351ea62264c01d74196ec26
    acpi: fix incorrect encoding for 0x{F-1}FFFF
changes the SSDT; update the expected files accordingly.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2014-04-14 15:13:27 +03:00
Igor Mammedov
482f38b948 acpi: fix incorrect encoding for 0x{F-1}FFFF
Fix a typo in build_append_int() which caused integer truncation
when the value is in the range 0x{F-1}FFFF, by packing it
as a WordConst instead of the required DWordConst.

In particular this fixes a regression: hotplug in slots 16, 17, 18 and 19
didn't work, since the SSDT had code like this:

                If (And (Arg0, 0x0000))
                {
                    Notify (S80, Arg1)
                }
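
A simplified sketch of the width selection in build_append_int() (0 and 1
use dedicated opcodes in the real code, shown in the hunk further down)
makes the truncation visible:

    #include <stdint.h>

    /* Sketch only: with the typo'd bound 0xFFFFF, a value such as 0x10000
     * (e.g. the mask for hotplug slot 16) fell into the two-byte WordConst
     * case and was silently truncated to 0x0000, producing the dead
     * "If (And (Arg0, 0x0000))" test above. */
    int aml_int_width(uint32_t value)
    {
        if (value <= 0xFF) {
            return 1;                  /* ByteConst */
        } else if (value <= 0xFFFF) {  /* was "<= 0xFFFFF" before the fix */
            return 2;                  /* WordConst */
        } else {
            return 4;                  /* DWordConst */
        }
    }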

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Stefan Weil <sw@weilnetz.de>
2014-04-14 15:13:27 +03:00
Peter Maydell
590e5dd98f configure: Make stack-protector test check both compile and link
Since we use the -fstack-protector argument at both compile and
link time in the build, we must check that it works with both
a compile and a link:
 * MacOSX only fails at the compile step, not at link time
 * some gcc cross environments only fail at the link stage (if they
   require a libssp and it's not present for some reason)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1397232832-32301-1-git-send-email-peter.maydell@linaro.org
Tested-by: Alexey Kardashevskiy <aik@ozlabs.ru>
2014-04-14 12:11:18 +01:00
Dmitry Fleytman
f12d048a52 vmxnet3: validate queues configuration read on migration
CVE-2013-4544

Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Reported-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1396604722-11902-5-git-send-email-dmitry@daynix.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 11:50:56 +01:00
Dmitry Fleytman
3c99afc779 vmxnet3: validate interrupt indices read on migration
CVE-2013-4544

Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Reported-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1396604722-11902-4-git-send-email-dmitry@daynix.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 11:50:49 +01:00
Dmitry Fleytman
9878d173f5 vmxnet3: validate queues configuration coming from guest
CVE-2013-4544

Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Reported-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1396604722-11902-3-git-send-email-dmitry@daynix.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 11:50:22 +01:00
Dmitry Fleytman
8c6c047899 vmxnet3: validate interrupt indices coming from guest
CVE-2013-4544

Signed-off-by: Dmitry Fleytman <dmitry@daynix.com>
Reported-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: 1396604722-11902-2-git-send-email-dmitry@daynix.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-14 11:33:18 +01:00
Cole Robinson
92b3eeadd9 qom: Fix crash with qom-list and link properties
Commit 9561fda8d9 changed the type of
'opaque' for link properties, but missed updating this call site.
Reproducer:

./x86_64-softmmu/qemu-system-x86_64 -qmp unix:./qmp.sock,server &
./scripts/qmp/qmp-shell ./qmp.sock
(QEMU) qom-list path=//machine/i440fx/pci.0/child[2]

Reported-by: Marcin Gibuła <m.gibula@beyond.pl>
Signed-off-by: Cole Robinson <crobinso@redhat.com>
Message-id: 2f8f007ce2152ac3b65f0811199662799c509225.1397155389.git.crobinso@redhat.com
Acked-by: Andreas Färber <afaerber@suse.de>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-11 17:57:36 +01:00
Michael S. Tsirkin
edc2438512 virtio-net: fix guest-triggerable buffer overrun
When a VM guest programs multicast addresses for
a virtio net card, it supplies a 32-bit
counter for the number of addresses.
These addresses are read into the tail portion of
a fixed macs array of size MAC_TABLE_ENTRIES,
at an offset equal to in_use.

To avoid the guest overflowing this array, qemu attempts
to test the size as follows:
-    if (in_use + mac_data.entries <= MAC_TABLE_ENTRIES) {

however, as mac_data.entries is uint32_t, this sum
can overflow, e.g. if in_use is 1 and mac_data.entries
is 0xffffffff then in_use + mac_data.entries will be 0.

Qemu will then read the guest-supplied buffer into this
memory, overflowing the buffer on the heap.
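
A hedged sketch of the check (the table size below is illustrative; the
real change is in the virtio-net hunk further down):

    #include <stdint.h>

    #define MAC_TABLE_ENTRIES 64   /* illustrative value */

    /* Sketch only: with 32-bit arithmetic the guest-controlled counter can
     * wrap the sum around zero, so the comparison has to be arranged so
     * that no addition can overflow (in_use is already bounded by
     * MAC_TABLE_ENTRIES at this point). */
    int mac_entries_fit(uint32_t in_use, uint32_t entries)
    {
        /* buggy:  in_use + entries <= MAC_TABLE_ENTRIES
         * with in_use == 1 and entries == 0xffffffff the sum wraps to 0
         * and the check wrongly passes. */
        return entries <= MAC_TABLE_ENTRIES - in_use;
    }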

CVE-2014-0150

Cc: qemu-stable@nongnu.org
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1397218574-25058-1-git-send-email-mst@redhat.com
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-11 16:02:23 +01:00
Peter Maydell
21e2db7260 Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging
Block patches for 2.0.0-rc3

# gpg: Signature made Fri 11 Apr 2014 13:37:34 BST using RSA key ID C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"

* remotes/kevin/tags/for-upstream:
  block-commit: speed is an optional parameter
  iscsi: Remember to set ret for iscsi_open in error case
  bochs: Fix catalog size check
  bochs: Fix memory leak in bochs_open() error path

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-11 14:07:24 +01:00
Peter Maydell
80fc7b1755 Merge remote-tracking branch 'remotes/kraxel/tags/pull-sdl-1' into staging
sdl2 relative mouse mode fixes.

# gpg: Signature made Fri 11 Apr 2014 11:36:46 BST using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg:                 aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg:                 aka "Gerd Hoffmann (private) <kraxel@gmail.com>"

* remotes/kraxel/tags/pull-sdl-1:
  input: sdl2: Fix relative mode to match SDL1 behavior
  input: sdl2: Fix guest_cursor logic

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-11 13:51:16 +01:00
Max Reitz
5450466394 block-commit: speed is an optional parameter
As speed is an optional parameter for the QMP block-commit command, it
should be set to 0 if not given (its value is undefined if has_speed is
false); that is, the speed should not be limited.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-11 13:59:49 +02:00
Fam Zheng
cd82b6fb4d iscsi: Remember to set ret for iscsi_open in error case
Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2014-04-11 13:59:49 +02:00
Kevin Wolf
715c3f60ef bochs: Fix catalog size check
The old check was off by a factor of 512 and didn't consider cases where
we don't get an exact division. This could lead to an out-of-bounds
array access in seek_to_sector().
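
A rough sketch of the corrected bound (the actual change is in the bochs
hunk further down; the helper name here is only for illustration):

    #include <stdint.h>

    #define BDRV_SECTOR_SIZE 512ULL
    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    /* Sketch only: the catalog needs one entry per extent.  total_sectors
     * is counted in 512-byte sectors while extent_size is in bytes (hence
     * the old factor-of-512 error), and a partial extent at the end still
     * needs an entry (hence the round-up instead of a plain division). */
    uint64_t catalog_entries_needed(uint64_t total_sectors, uint32_t extent_size)
    {
        return DIV_ROUND_UP(total_sectors, extent_size / BDRV_SECTOR_SIZE);
    }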

Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
2014-04-11 13:59:49 +02:00
Kevin Wolf
28ec11bc88 bochs: Fix memory leak in bochs_open() error path
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Laszlo Ersek <lersek@redhat.com>
2014-04-11 13:59:49 +02:00
Cole Robinson
2d968ffbae input: sdl2: Fix relative mode to match SDL1 behavior
Right now relative mode accelerates too fast, and has the 'invisible wall'
problem. SDL2 added an explicit API to handle this use case, so let's use
it.

Signed-off-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2014-04-11 12:19:16 +02:00
Cole Robinson
afbc0dd649 input: sdl2: Fix guest_cursor logic
Unbreaks relative mouse mode with sdl2, just as was done for sdl.c
in c3aa84b6.

Signed-off-by: Cole Robinson <crobinso@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2014-04-11 12:19:16 +02:00
Peter Maydell
f516a5cc05 Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
acpi: DSDT update

Two fixes here:
- Test fix to avoid warning with make check.
- Hex file update so that people building QEMU
  without iasl installed get exactly the same ACPI
  as those who have it.

Both should help avoid user confusion.

As it's very easy to check that the produced ACPI
binary didn't change, I think these are very low risk.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Thu 10 Apr 2014 17:09:43 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>"

* remotes/mst/tags/for_upstream:
  acpi: update generated hex files
  tests/acpi: update expected DSDT files

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-10 23:07:56 +01:00
Peter Maydell
0a9077ea14 configure: use do_cc when checking for -fstack-protector support
MacOSX clang silently swallows unrecognized -f options when doing a link
with '-framework' also on the command line, so to detect support for
the various -fstack-protector options we must do a plain .c to .o compile,
not a complete compile-and-link.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Message-id: 1397041487-28477-1-git-send-email-peter.maydell@linaro.org
2014-04-10 22:17:47 +01:00
Michael S. Tsirkin
775478418a acpi: update generated hex files
commit f2ccc311df
    dsdt: tweak ACPI ID for hotplug resource device
changes the DSDT; update the hex files to match.

Otherwise the fix is only effective if QEMU is built
with iasl.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2014-04-10 19:03:18 +03:00
Michael S. Tsirkin
50329d3418 tests/acpi: update expected DSDT files
commit f2ccc311df
    dsdt: tweak ACPI ID for hotplug resource device
changes the DSDT; update the test's expected files to match.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reported-by: Igor Mammedov <imammedo@redhat.com>
2014-04-09 17:52:08 +03:00
Peter Maydell
efcc87d9ae Update version for v2.0.0-rc2 release
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-08 18:52:06 +01:00
Peter Maydell
7dc176bce4 hw/pci-host/prep: Don't reverse IO accesses on bigendian hosts
The raven_io_read() and raven_io_write() functions pass and
return values in little-endian format (since the IO op struct
is marked DEVICE_LITTLE_ENDIAN); however they were storing the
values in the buffer to pass to address_space_read/write()
in host-endian order, which meant that on big-endian hosts
the values were inadvertently reversed. Use the *_le_p()
accessors instead so that we are consistent regardless of
host endianness.

Strictly speaking the byte order of the buffer for
address_space_rw() is target byte order (which for PPC
will be BE) but it doesn't actually matter as long as we
are consistent about the marking on the IO op struct and
which stl_*_p() accessors we use.

This bug was probably introduced due to confusion caused by
the two different versions of ldl_p() and friends:
 bswap.h defines versions meaning "host endianness access"
 cpu-all.h defines versions meaning "target endianness access"
As a target-independent source file prep.c gets the bswap.h
versions; the very similar looking code in ioport.c is
compiled per-target and gets the cpu-all.h versions.
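
As a hedged illustration (these are simplified stand-ins, not QEMU's
actual helpers), the difference between a host-order store and an
explicit little-endian store:

    #include <stdint.h>
    #include <string.h>

    /* Sketch only: stl_p() stores in host byte order, so on a big-endian
     * host the buffer handed to address_space_rw() came out byte-reversed
     * relative to the DEVICE_LITTLE_ENDIAN contract; stl_le_p() stores
     * little-endian on every host, which is what the fix switches to. */
    void store_host_order(void *p, uint32_t v)        /* like stl_p() */
    {
        memcpy(p, &v, sizeof(v));
    }

    void store_little_endian(void *p, uint32_t v)     /* like stl_le_p() */
    {
        uint8_t *b = p;
        b[0] = v & 0xff;
        b[1] = (v >> 8) & 0xff;
        b[2] = (v >> 16) & 0xff;
        b[3] = (v >> 24) & 0xff;
    }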

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 1396972271-22660-1-git-send-email-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <rth@twiddle.net>
2014-04-08 18:37:45 +01:00
Peter Maydell
9bc1a1d817 Merge remote-tracking branch 'remotes/mst/tags/for_upstream' into staging
acpi bug fix

Here is a single last minute fix for 2.0

This changes the HID of the container used to claim
resources for CPU hotplug.
As a result, windows XP SP3 no longer brings up
an annoying "found new hardware" wizard on boot.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

# gpg: Signature made Tue 08 Apr 2014 13:23:30 BST using RSA key ID D28D5469
# gpg: Good signature from "Michael S. Tsirkin <mst@kernel.org>"
# gpg:                 aka "Michael S. Tsirkin <mst@redhat.com>"

* remotes/mst/tags/for_upstream:
  dsdt: tweak ACPI ID for hotplug resource device

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-08 13:59:28 +01:00
Michael S. Tsirkin
f2ccc311df dsdt: tweak ACPI ID for hotplug resource device
ACPI0004 seems too new:
Windows XP complains about an unrecognized device.
This is a regression since 1.7.
Use PNP0A06 instead - Generic Container Device.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Reviewed-By: Igor Mammedov <imammedo@redhat.com>
2014-04-08 15:22:59 +03:00
Peter Maydell
093de72b9c Merge remote-tracking branch 'remotes/kraxel/tags/pull-gtk-5' into staging
gtk: Implement grab-on-click behavior in relative mode

# gpg: Signature made Tue 08 Apr 2014 12:58:49 BST using RSA key ID D3E87138
# gpg: Good signature from "Gerd Hoffmann (work) <kraxel@redhat.com>"
# gpg:                 aka "Gerd Hoffmann <gerd@kraxel.org>"
# gpg:                 aka "Gerd Hoffmann (private) <kraxel@gmail.com>"

* remotes/kraxel/tags/pull-gtk-5:
  gtk: Implement grab-on-click behavior in relative mode

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-08 13:05:25 +01:00
Peter Maydell
9a4fb6aa19 Merge remote-tracking branch 'remotes/agraf/tags/signed-ppc-for-upstream' into staging
Patch queue for ppc - 2014-04-08

This is the final queue for 2.0! It fixes a lot of bugs people have
seen during testing:

  - Fix e500 SMP
  - Fix book3s_64 DEC
  - Fix VSX (new feature in 2.0) for LE hosts
  - Fix PR KVM on top of pHyp (SLOF update)

# gpg: Signature made Tue 08 Apr 2014 10:24:18 BST using RSA key ID 03FEDC60
# gpg: Can't check signature: public key not found

* remotes/agraf/tags/signed-ppc-for-upstream:
  PPC: Add l1 cache sizes for 970 and above systems
  ppce500_spin: Initialize struct properly
  PPC: Only enter MSR_POW when no interrupts pending
  PPC: Clean up DECR implementation
  target-ppc: Correct VSX Integer to FP Conversion
  target-ppc: Correct VSX FP to Integer Conversion
  target-ppc: Correct VSX FP to FP Conversions
  target-ppc: Correct VSX Scalar Compares
  target-ppc: Correct Simple VSR LE Host Inversions
  target-ppc: Correct LE Host Inversion of Lower VSRs
  target-ppc: Define Endian-Correct Accessors for VSR Field Access
  target-ppc: Bug: VSX Convert to Integer Should Truncate
  softfloat: Introduce float32_to_uint64_round_to_zero
  pseries: Update SLOF firmware image to qemu-slof-20140404
  PPC: E500: Set PIR default reset value rather than SPR value

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-08 10:58:31 +01:00
Peter Maydell
e792933ce1 Merge remote-tracking branch 'remotes/mdroth/qga-pull-2014-4-7' into staging
* remotes/mdroth/qga-pull-2014-4-7:
  vss-win32: Fix build with mingw64-headers-3.1.0
  Makefile: add qga-vss-dll-obj-y to nested variables

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2014-04-08 10:41:30 +01:00
Alexander Graf
06f6e12491 PPC: Add l1 cache sizes for 970 and above systems
Book3s_64 guests expect the L1 cache size in device tree, so let's give
them proper values for all CPU types we support.

This fixes a "not compliant" warning with sles11 guests on -M pseries for me.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:06 +02:00
Alexander Graf
6a2b3d89fa ppce500_spin: Initialize struct properly
The spinning struct is in guest endianness, so we need to initialize
its variables in guest endianness too.

This fixes booting e500 guests with SMP on x86 for me.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:05 +02:00
Alexander Graf
05edc26c61 PPC: Only enter MSR_POW when no interrupts pending
We were entering the power saving state even when interrupts (like an
external interrupt or a decrementer interrupt) were still in flight.

In case we find a pending interrupt, don't enter power saving state.

Signed-off-by: Alexander Graf <agraf@suse.de>
Reviewed-by: Tom Musta <tmusta@gmail.com>
2014-04-08 11:20:05 +02:00
Alexander Graf
e81a982aa5 PPC: Clean up DECR implementation
There are three different variants of the decrementer for BookE and BookS.

The BookE variant sets TSR[DIS] to 1 when the DEC value becomes 1 or 0. TSR[DIS]
is then the indicator whether the decrementor interrupt line is asserted or not.

The old BookS variant treats DEC as an edge interrupt that gets triggered when
the DEC value's top bit goes from 0 to 1.

The new BookS variant maintains the assertion bit inside DEC itself. Whenever
the DEC value becomes negative (top bit set) the DEC interrupt line is asserted.

So far we implemented mostly the old BookS variant. Let's do them all properly.

This fixes booting pseries ppc64 guest images in TCG mode for me.

Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:04 +02:00
Tom Musta
6cd7db3d92 target-ppc: Correct VSX Integer to FP Conversion
This patch corrects the VSX integer to floating point conversion instructions
by using the endian correct accessors.  The auxiliary "j" index used by the
existing macros is now obsolete and is removed.  The JOFFSET preprocessor
macro is also obsolete and removed.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:04 +02:00
Tom Musta
d1dec5ef55 target-ppc: Correct VSX FP to Integer Conversion
This patch corrects the VSX floating point to integer conversion
instructions by using the endian correct accessors.  The auxiliary
"j" index used by the existing macros is now obsolete and is removed.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:03 +02:00
Tom Musta
6bbad7a91e target-ppc: Correct VSX FP to FP Conversions
This change corrects the VSX double precision to single precision and
single precision to double precisions conversion routines.  The endian
correct accessors are now used.  The auxiliary "j" index is no longer
necessary and is eliminated.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:03 +02:00
Tom Musta
50fc89e7b1 target-ppc: Correct VSX Scalar Compares
This change fixes the VSX scalar compare instructions.  The existing usage of "x.f64[0]"
is changed to "x.VsrD(0)".

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:03 +02:00
Tom Musta
bcb7652e8d target-ppc: Correct Simple VSR LE Host Inversions
A common pattern in the VSX helper code macros is the use of "x.fld[i]" where
"x" is a VSR and "fld" is an argument to a macro ("f64" or "f32" is passed).
This is not always correct on LE hosts.

This change addresses all instances of this pattern to be "x.fld" where "fld" is:

  - "VsrD(0)" for scalar instructions accessing 64-bit numbers
  - "VsrD(i)" for vector instructions accessing 64-bit numbers
  - "VsrW(i)" for vector instructions accessing 32-bit numbers

Note that there are no instances of this pattern where a scalar instruction
accesses a 32-bit number.

Note also that it would be correct to use "VsrD(i)" for scalar instructions since
the loop index is only ever "0".  I have chosen to use "VsrD(0)" instead ... it
seems a little clearer.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:02 +02:00
Tom Musta
d359db00e6 target-ppc: Correct LE Host Inversion of Lower VSRs
This change properly orders the doublewords of the VSRs 0-31.  Because these
registers are constructed from separate doublewords, they must be inverted
on Little Endian hosts.  The inversion is performed both when the VSR is read
and when it is written.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:02 +02:00
Tom Musta
80189035de target-ppc: Define Endian-Correct Accessors for VSR Field Access
This change defines accessors for VSR doubleword and word fields that
are correct from a host endian perspective.  This allows helper code to
use the Power ISA indexing numbers directly.

For example, the xscvdpsxws instruction has a target VSR that looks
like this:

  0           32       64                    127
  +-----------+--------+-----------+-----------+
  | undefined | SW     | undefined | undefined |
  +-----------+--------+-----------+-----------+

VSX helper code will use VsrW(1) to access this field.
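
For reference, the accessors this change introduces (as they appear in the
target-ppc hunk further down in this page):

    /* On a little-endian host the doublewords/words of a VSR are stored in
     * the reverse of the Power ISA numbering, so the index is inverted. */
    #if defined(HOST_WORDS_BIGENDIAN)
    #define VsrW(i) u32[i]
    #define VsrD(i) u64[i]
    #else
    #define VsrW(i) u32[3-(i)]
    #define VsrD(i) u64[1-(i)]
    #endif
    /* e.g. the xscvdpsxws helper uses VsrW(1) to reach the "SW" word shown
     * above, regardless of host endianness. */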

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:01 +02:00
Tom Musta
0453099b7d target-ppc: Bug: VSX Convert to Integer Should Truncate
The various VSX Convert to Integer instructions should truncate the
floating point number to an integer value, which is equivalent to
a round-to-zero rounding mode.  The existing VSX floating point to
integer conversion helpers are erroneously using the rounding mode set
in the PowerPC Floating Point Status and Control Register (FPSCR).
This change corrects this defect by using the appropriate
float*_to_*_round_to_zero() routines from the softfloat library.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:01 +02:00
Tom Musta
a13d448968 softfloat: Introduce float32_to_uint64_round_to_zero
This change adds the float32_to_uint64_round_to_zero function to the softfloat
library.  This function completes the set of float32 to INT round-to-zero
conversion routines, where INT is {int32_t, uint32_t, int64_t, uint64_t}.

This contribution can be licensed under either the softfloat-2a or -2b
license.

Signed-off-by: Tom Musta <tommusta@gmail.com>
Tested-by: Tom Musta <tommusta@gmail.com>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:00 +02:00
Alexey Kardashevskiy
3636226ae4 pseries: Update SLOF firmware image to qemu-slof-20140404
The change log is:
  > Isolate sc 1 detection logic
  > build: auto-detect ppc64 architecture
  > cas: increase hcall buffer size to accomodate 256 cpus
  > usb: change device tree naming
  > usb-core: adjust port numbers in set_address
  > virtio-scsi: correct srplun comment
  > Fix kernel loading
  > Workaround to make grub2 assign server ip from dhcp ack packet only
  > ELF: Enter LE binary in LE mode
  > ELF loading should fail for virt != phys

Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Alexander Graf <agraf@suse.de>
2014-04-08 11:20:00 +02:00
Alexander Graf
6a450df9b8 PPC: E500: Set PIR default reset value rather than SPR value
We now reset SPRs to their reset values on CPU reset. So if we want
to have an SPR persistently changed, we need to change its default
reset value rather than manually changing the value itself.

Do this for SPR_BOOKE_PIR, fixing e500v2 SMP boot.

Reported-by: Frederic Konrad <fred.konrad@greensocs.com>
Signed-off-by: Alexander Graf <agraf@suse.de>
Tested-by: KONRAD Frederic <fred.konrad@greensocs.com>
2014-04-08 11:19:59 +02:00
Tomoki Sekiyama
9854202b57 vss-win32: Fix build with mingw64-headers-3.1.0
In mingw64-headers-3.1.0, a definition of _com_issue_error() is added which
conflicts with the definition in install.cpp. This adds version checking for
the mingw headers to disable the local definition when headers >= 3.1 are used.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-04-07 14:39:19 -05:00
Tomoki Sekiyama
577a67234d Makefile: add qga-vss-dll-obj-y to nested variables
The build rule for qga/vss-win32/qga-vss.dll is broken by commit
ba1183da9a, because it omits
qga-vss-dll-obj-y from the list of nested variables.
This fixes the build of qga-vss.dll by adding qga-vss-dll-obj-y to the list.

Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama@hds.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2014-04-07 14:39:19 -05:00
37 changed files with 485 additions and 347 deletions

View File

@@ -133,6 +133,7 @@ dummy := $(call unnest-vars,, \
stub-obj-y \
util-obj-y \
qga-obj-y \
qga-vss-dll-obj-y \
block-obj-y \
block-obj-m \
common-obj-y \

View File

@@ -1 +1 @@
1.7.91
1.7.93

View File

@@ -148,16 +148,26 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
s->extent_blocks = 1 + (le32_to_cpu(bochs.extent) - 1) / 512;
s->extent_size = le32_to_cpu(bochs.extent);
if (s->extent_size == 0) {
error_setg(errp, "Extent size may not be zero");
return -EINVAL;
if (s->extent_size < BDRV_SECTOR_SIZE) {
/* bximage actually never creates extents smaller than 4k */
error_setg(errp, "Extent size must be at least 512");
ret = -EINVAL;
goto fail;
} else if (!is_power_of_2(s->extent_size)) {
error_setg(errp, "Extent size %" PRIu32 " is not a power of two",
s->extent_size);
ret = -EINVAL;
goto fail;
} else if (s->extent_size > 0x800000) {
error_setg(errp, "Extent size %" PRIu32 " is too large",
s->extent_size);
return -EINVAL;
ret = -EINVAL;
goto fail;
}
if (s->catalog_size < bs->total_sectors / s->extent_size) {
if (s->catalog_size < DIV_ROUND_UP(bs->total_sectors,
s->extent_size / BDRV_SECTOR_SIZE))
{
error_setg(errp, "Catalog size is too small for this disk size");
ret = -EINVAL;
goto fail;

View File

@@ -1233,6 +1233,7 @@ static int iscsi_open(BlockDriverState *bs, QDict *options, int flags,
iscsi_readcapacity_sync(iscsilun, &local_err);
if (local_err != NULL) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto out;
}
bs->total_sectors = sector_lun2qemu(iscsilun->num_blocks, iscsilun);

View File

@@ -1876,6 +1876,10 @@ void qmp_block_commit(const char *device,
*/
BlockdevOnError on_error = BLOCKDEV_ON_ERROR_REPORT;
if (!has_speed) {
speed = 0;
}
/* drain all i/o before commits */
bdrv_drain_all();

configure
View File

@@ -1448,7 +1448,10 @@ done
if test "$stack_protector" != "no" ; then
gcc_flags="-fstack-protector-strong -fstack-protector-all"
for flag in $gcc_flags; do
if compile_prog "-Werror $flag" "" ; then
# We need to check both a compile and a link, since some compiler
# setups fail only on a .c->.o compile and some only at link time
if do_cc $QEMU_CFLAGS -Werror $flag -c -o $TMPO $TMPC &&
compile_prog "-Werror $flag" ""; then
QEMU_CFLAGS="$QEMU_CFLAGS $flag"
LIBTOOLFLAGS="$LIBTOOLFLAGS -Wc,$flag"
break

View File

@@ -1626,6 +1626,26 @@ uint64 float32_to_uint64(float32 a STATUS_PARAM)
return roundAndPackUint64(aSign, aSig64, aSigExtra STATUS_VAR);
}
/*----------------------------------------------------------------------------
| Returns the result of converting the single-precision floating-point value
| `a' to the 64-bit unsigned integer format. The conversion is
| performed according to the IEC/IEEE Standard for Binary Floating-Point
| Arithmetic, except that the conversion is always rounded toward zero. If
| `a' is a NaN, the largest unsigned integer is returned. Otherwise, if the
| conversion overflows, the largest unsigned integer is returned. If the
| 'a' is negative, the result is rounded and zero is returned; values that do
| not round to zero will raise the inexact flag.
*----------------------------------------------------------------------------*/
uint64 float32_to_uint64_round_to_zero(float32 a STATUS_PARAM)
{
signed char current_rounding_mode = STATUS(float_rounding_mode);
set_float_rounding_mode(float_round_to_zero STATUS_VAR);
int64_t v = float32_to_uint64(a STATUS_VAR);
set_float_rounding_mode(current_rounding_mode STATUS_VAR);
return v;
}
/*----------------------------------------------------------------------------
| Returns the result of converting the single-precision floating-point value
| `a' to the 64-bit two's complement integer format. The conversion is

View File

@@ -391,7 +391,7 @@ static void build_append_int(GArray *table, uint32_t value)
build_append_byte(table, 0x01); /* OneOp */
} else if (value <= 0xFF) {
build_append_value(table, value, 1);
} else if (value <= 0xFFFFF) {
} else if (value <= 0xFFFF) {
build_append_value(table, value, 2);
} else {
build_append_value(table, value, 4);

View File

@@ -93,7 +93,7 @@ Scope(\_SB) {
}
Device(CPU_HOTPLUG_RESOURCE_DEVICE) {
Name(_HID, "ACPI0004")
Name(_HID, EisaId("PNP0A06"))
Name(_CRS, ResourceTemplate() {
IO(Decode16, CPU_STATUS_BASE, CPU_STATUS_BASE, 0, CPU_STATUS_LEN)

View File

@@ -3,12 +3,12 @@ static unsigned char AcpiDsdtAmlCode[] = {
0x53,
0x44,
0x54,
0x85,
0x80,
0x11,
0x0,
0x0,
0x1,
0x8b,
0x60,
0x42,
0x58,
0x50,
@@ -31,8 +31,8 @@ static unsigned char AcpiDsdtAmlCode[] = {
0x4e,
0x54,
0x4c,
0x23,
0x8,
0x15,
0x11,
0x13,
0x20,
0x10,
@@ -4010,7 +4010,7 @@ static unsigned char AcpiDsdtAmlCode[] = {
0x53,
0x1,
0x10,
0x47,
0x42,
0x11,
0x5f,
0x53,
@@ -4243,7 +4243,7 @@ static unsigned char AcpiDsdtAmlCode[] = {
0x60,
0x5b,
0x82,
0x2e,
0x29,
0x50,
0x52,
0x45,
@@ -4253,16 +4253,11 @@ static unsigned char AcpiDsdtAmlCode[] = {
0x48,
0x49,
0x44,
0xd,
0xc,
0x41,
0x43,
0x50,
0x49,
0x30,
0x30,
0x30,
0x34,
0x0,
0xd0,
0xa,
0x6,
0x8,
0x5f,
0x43,

View File

@@ -3,12 +3,12 @@ static unsigned char Q35AcpiDsdtAmlCode[] = {
0x53,
0x44,
0x54,
0xd7,
0xd2,
0x1c,
0x0,
0x0,
0x1,
0x3e,
0x13,
0x42,
0x58,
0x50,
@@ -31,8 +31,8 @@ static unsigned char Q35AcpiDsdtAmlCode[] = {
0x4e,
0x54,
0x4c,
0x23,
0x8,
0x15,
0x11,
0x13,
0x20,
0x10,
@@ -6959,7 +6959,7 @@ static unsigned char Q35AcpiDsdtAmlCode[] = {
0x53,
0x1,
0x10,
0x47,
0x42,
0x11,
0x5f,
0x53,
@@ -7192,7 +7192,7 @@ static unsigned char Q35AcpiDsdtAmlCode[] = {
0x60,
0x5b,
0x82,
0x2e,
0x29,
0x50,
0x52,
0x45,
@@ -7202,16 +7202,11 @@ static unsigned char Q35AcpiDsdtAmlCode[] = {
0x48,
0x49,
0x44,
0xd,
0xc,
0x41,
0x43,
0x50,
0x49,
0x30,
0x30,
0x30,
0x34,
0x0,
0xd0,
0xa,
0x6,
0x8,
0x5f,
0x43,

View File

@@ -1602,7 +1602,7 @@ static bool cmd_smart(IDEState *s, uint8_t cmd)
case 2: /* extended self test */
s->smart_selftest_count++;
if (s->smart_selftest_count > 21) {
s->smart_selftest_count = 0;
s->smart_selftest_count = 1;
}
n = 2 + (s->smart_selftest_count - 1) * 24;
s->smart_selftest_data[n] = s->sector;

View File

@@ -677,7 +677,7 @@ static int virtio_net_handle_mac(VirtIONet *n, uint8_t cmd,
goto error;
}
if (in_use + mac_data.entries <= MAC_TABLE_ENTRIES) {
if (mac_data.entries <= MAC_TABLE_ENTRIES - in_use) {
s = iov_to_buf(iov, iov_cnt, 0, &macs[in_use * ETH_ALEN],
mac_data.entries * ETH_ALEN);
if (s != mac_data.entries * ETH_ALEN) {

View File

@@ -52,6 +52,9 @@
#define VMXNET3_DEVICE_VERSION 0x1
#define VMXNET3_DEVICE_REVISION 0x1
/* Number of interrupt vectors for non-MSIx modes */
#define VMXNET3_MAX_NMSIX_INTRS (1)
/* Macros for rings descriptors access */
#define VMXNET3_READ_TX_QUEUE_DESCR8(dpa, field) \
(vmw_shmem_ld8(dpa + offsetof(struct Vmxnet3_TxQueueDesc, field)))
@@ -1305,6 +1308,51 @@ static bool vmxnet3_verify_intx(VMXNET3State *s, int intx)
(pci_get_byte(s->parent_obj.config + PCI_INTERRUPT_PIN) - 1));
}
static void vmxnet3_validate_interrupt_idx(bool is_msix, int idx)
{
int max_ints = is_msix ? VMXNET3_MAX_INTRS : VMXNET3_MAX_NMSIX_INTRS;
if (idx >= max_ints) {
hw_error("Bad interrupt index: %d\n", idx);
}
}
static void vmxnet3_validate_interrupts(VMXNET3State *s)
{
int i;
VMW_CFPRN("Verifying event interrupt index (%d)", s->event_int_idx);
vmxnet3_validate_interrupt_idx(s->msix_used, s->event_int_idx);
for (i = 0; i < s->txq_num; i++) {
int idx = s->txq_descr[i].intr_idx;
VMW_CFPRN("Verifying TX queue %d interrupt index (%d)", i, idx);
vmxnet3_validate_interrupt_idx(s->msix_used, idx);
}
for (i = 0; i < s->rxq_num; i++) {
int idx = s->rxq_descr[i].intr_idx;
VMW_CFPRN("Verifying RX queue %d interrupt index (%d)", i, idx);
vmxnet3_validate_interrupt_idx(s->msix_used, idx);
}
}
static void vmxnet3_validate_queues(VMXNET3State *s)
{
/*
* txq_num and rxq_num are total number of queues
* configured by guest. These numbers must not
* exceed corresponding maximal values.
*/
if (s->txq_num > VMXNET3_DEVICE_MAX_TX_QUEUES) {
hw_error("Bad TX queues number: %d\n", s->txq_num);
}
if (s->rxq_num > VMXNET3_DEVICE_MAX_RX_QUEUES) {
hw_error("Bad RX queues number: %d\n", s->rxq_num);
}
}
static void vmxnet3_activate_device(VMXNET3State *s)
{
int i;
@@ -1351,7 +1399,7 @@ static void vmxnet3_activate_device(VMXNET3State *s)
VMXNET3_READ_DRV_SHARED8(s->drv_shmem, devRead.misc.numRxQueues);
VMW_CFPRN("Number of TX/RX queues %u/%u", s->txq_num, s->rxq_num);
assert(s->txq_num <= VMXNET3_DEVICE_MAX_TX_QUEUES);
vmxnet3_validate_queues(s);
qdescr_table_pa =
VMXNET3_READ_DRV_SHARED64(s->drv_shmem, devRead.misc.queueDescPA);
@@ -1447,6 +1495,8 @@ static void vmxnet3_activate_device(VMXNET3State *s)
sizeof(s->rxq_descr[i].rxq_stats));
}
vmxnet3_validate_interrupts(s);
/* Make sure everything is in place before device activation */
smp_wmb();
@@ -2005,7 +2055,6 @@ vmxnet3_cleanup_msix(VMXNET3State *s)
}
}
#define VMXNET3_MSI_NUM_VECTORS (1)
#define VMXNET3_MSI_OFFSET (0x50)
#define VMXNET3_USE_64BIT (true)
#define VMXNET3_PER_VECTOR_MASK (false)
@@ -2016,7 +2065,7 @@ vmxnet3_init_msi(VMXNET3State *s)
PCIDevice *d = PCI_DEVICE(s);
int res;
res = msi_init(d, VMXNET3_MSI_OFFSET, VMXNET3_MSI_NUM_VECTORS,
res = msi_init(d, VMXNET3_MSI_OFFSET, VMXNET3_MAX_NMSIX_INTRS,
VMXNET3_USE_64BIT, VMXNET3_PER_VECTOR_MASK);
if (0 > res) {
VMW_WRPRN("Failed to initialize MSI, error %d", res);
@@ -2342,6 +2391,9 @@ static int vmxnet3_post_load(void *opaque, int version_id)
}
}
vmxnet3_validate_queues(s);
vmxnet3_validate_interrupts(s);
return 0;
}

View File

@@ -145,9 +145,9 @@ static uint64_t raven_io_read(void *opaque, hwaddr addr,
if (size == 1) {
return buf[0];
} else if (size == 2) {
return lduw_p(buf);
return lduw_le_p(buf);
} else if (size == 4) {
return ldl_p(buf);
return ldl_le_p(buf);
} else {
g_assert_not_reached();
}
@@ -164,9 +164,9 @@ static void raven_io_write(void *opaque, hwaddr addr,
if (size == 1) {
buf[0] = val;
} else if (size == 2) {
stw_p(buf, val);
stw_le_p(buf, val);
} else if (size == 4) {
stl_p(buf, val);
stl_le_p(buf, val);
} else {
g_assert_not_reached();
}

View File

@@ -649,7 +649,7 @@ void ppce500_init(QEMUMachineInitArgs *args, PPCE500Params *params)
input = (qemu_irq *)env->irq_inputs;
irqs[i][OPENPIC_OUTPUT_INT] = input[PPCE500_INPUT_INT];
irqs[i][OPENPIC_OUTPUT_CINT] = input[PPCE500_INPUT_CINT];
env->spr[SPR_BOOKE_PIR] = cs->cpu_index = i;
env->spr_cb[SPR_BOOKE_PIR].default_value = cs->cpu_index = i;
env->mpic_iack = MPC8544_CCSRBAR_BASE +
MPC8544_MPIC_REGS_OFFSET + 0xa0;

View File

@@ -620,6 +620,13 @@ static void cpu_ppc_tb_start (CPUPPCState *env)
}
}
bool ppc_decr_clear_on_delivery(CPUPPCState *env)
{
ppc_tb_t *tb_env = env->tb_env;
int flags = PPC_DECR_UNDERFLOW_TRIGGERED | PPC_DECR_UNDERFLOW_LEVEL;
return ((tb_env->flags & flags) == PPC_DECR_UNDERFLOW_TRIGGERED);
}
static inline uint32_t _cpu_ppc_load_decr(CPUPPCState *env, uint64_t next)
{
ppc_tb_t *tb_env = env->tb_env;
@@ -677,6 +684,11 @@ static inline void cpu_ppc_decr_excp(PowerPCCPU *cpu)
ppc_set_irq(cpu, PPC_INTERRUPT_DECR, 1);
}
static inline void cpu_ppc_decr_lower(PowerPCCPU *cpu)
{
ppc_set_irq(cpu, PPC_INTERRUPT_DECR, 0);
}
static inline void cpu_ppc_hdecr_excp(PowerPCCPU *cpu)
{
/* Raise it */
@@ -684,11 +696,16 @@ static inline void cpu_ppc_hdecr_excp(PowerPCCPU *cpu)
ppc_set_irq(cpu, PPC_INTERRUPT_HDECR, 1);
}
static inline void cpu_ppc_hdecr_lower(PowerPCCPU *cpu)
{
ppc_set_irq(cpu, PPC_INTERRUPT_HDECR, 0);
}
static void __cpu_ppc_store_decr(PowerPCCPU *cpu, uint64_t *nextp,
QEMUTimer *timer,
void (*raise_excp)(PowerPCCPU *),
uint32_t decr, uint32_t value,
int is_excp)
void (*raise_excp)(void *),
void (*lower_excp)(PowerPCCPU *),
uint32_t decr, uint32_t value)
{
CPUPPCState *env = &cpu->env;
ppc_tb_t *tb_env = env->tb_env;
@@ -702,59 +719,74 @@ static void __cpu_ppc_store_decr(PowerPCCPU *cpu, uint64_t *nextp,
return;
}
/*
* Going from 2 -> 1, 1 -> 0 or 0 -> -1 is the event to generate a DEC
* interrupt.
*
* If we get a really small DEC value, we can assume that by the time we
* handled it we should inject an interrupt already.
*
* On MSB level based DEC implementations the MSB always means the interrupt
* is pending, so raise it on those.
*
* On MSB edge based DEC implementations the MSB going from 0 -> 1 triggers
* an edge interrupt, so raise it here too.
*/
if ((value < 3) ||
((tb_env->flags & PPC_DECR_UNDERFLOW_LEVEL) && (value & 0x80000000)) ||
((tb_env->flags & PPC_DECR_UNDERFLOW_TRIGGERED) && (value & 0x80000000)
&& !(decr & 0x80000000))) {
(*raise_excp)(cpu);
return;
}
/* On MSB level based systems a 0 for the MSB stops interrupt delivery */
if (!(value & 0x80000000) && (tb_env->flags & PPC_DECR_UNDERFLOW_LEVEL)) {
(*lower_excp)(cpu);
}
/* Calculate the next timer event */
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
next = now + muldiv64(value, get_ticks_per_sec(), tb_env->decr_freq);
if (is_excp) {
next += *nextp - now;
}
if (next == now) {
next++;
}
*nextp = next;
/* Adjust timer */
timer_mod(timer, next);
/* If we set a negative value and the decrementer was positive, raise an
* exception.
*/
if ((tb_env->flags & PPC_DECR_UNDERFLOW_TRIGGERED)
&& (value & 0x80000000)
&& !(decr & 0x80000000)) {
(*raise_excp)(cpu);
}
}
static inline void _cpu_ppc_store_decr(PowerPCCPU *cpu, uint32_t decr,
uint32_t value, int is_excp)
uint32_t value)
{
ppc_tb_t *tb_env = cpu->env.tb_env;
__cpu_ppc_store_decr(cpu, &tb_env->decr_next, tb_env->decr_timer,
&cpu_ppc_decr_excp, decr, value, is_excp);
tb_env->decr_timer->cb, &cpu_ppc_decr_lower, decr,
value);
}
void cpu_ppc_store_decr (CPUPPCState *env, uint32_t value)
{
PowerPCCPU *cpu = ppc_env_get_cpu(env);
_cpu_ppc_store_decr(cpu, cpu_ppc_load_decr(env), value, 0);
_cpu_ppc_store_decr(cpu, cpu_ppc_load_decr(env), value);
}
static void cpu_ppc_decr_cb(void *opaque)
{
PowerPCCPU *cpu = opaque;
_cpu_ppc_store_decr(cpu, 0x00000000, 0xFFFFFFFF, 1);
cpu_ppc_decr_excp(cpu);
}
static inline void _cpu_ppc_store_hdecr(PowerPCCPU *cpu, uint32_t hdecr,
uint32_t value, int is_excp)
uint32_t value)
{
ppc_tb_t *tb_env = cpu->env.tb_env;
if (tb_env->hdecr_timer != NULL) {
__cpu_ppc_store_decr(cpu, &tb_env->hdecr_next, tb_env->hdecr_timer,
&cpu_ppc_hdecr_excp, hdecr, value, is_excp);
tb_env->hdecr_timer->cb, &cpu_ppc_hdecr_lower,
hdecr, value);
}
}
@@ -762,14 +794,14 @@ void cpu_ppc_store_hdecr (CPUPPCState *env, uint32_t value)
{
PowerPCCPU *cpu = ppc_env_get_cpu(env);
_cpu_ppc_store_hdecr(cpu, cpu_ppc_load_hdecr(env), value, 0);
_cpu_ppc_store_hdecr(cpu, cpu_ppc_load_hdecr(env), value);
}
static void cpu_ppc_hdecr_cb(void *opaque)
{
PowerPCCPU *cpu = opaque;
_cpu_ppc_store_hdecr(cpu, 0x00000000, 0xFFFFFFFF, 1);
cpu_ppc_hdecr_excp(cpu);
}
static void cpu_ppc_store_purr(PowerPCCPU *cpu, uint64_t value)
@@ -792,8 +824,8 @@ static void cpu_ppc_set_tb_clk (void *opaque, uint32_t freq)
* if a decrementer exception is pending when it enables msr_ee at startup,
* it's not ready to handle it...
*/
_cpu_ppc_store_decr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 0);
_cpu_ppc_store_hdecr(cpu, 0xFFFFFFFF, 0xFFFFFFFF, 0);
_cpu_ppc_store_decr(cpu, 0xFFFFFFFF, 0xFFFFFFFF);
_cpu_ppc_store_hdecr(cpu, 0xFFFFFFFF, 0xFFFFFFFF);
cpu_ppc_store_purr(cpu, 0x0000000000000000ULL);
}
@@ -806,6 +838,10 @@ clk_setup_cb cpu_ppc_tb_init (CPUPPCState *env, uint32_t freq)
tb_env = g_malloc0(sizeof(ppc_tb_t));
env->tb_env = tb_env;
tb_env->flags = PPC_DECR_UNDERFLOW_TRIGGERED;
if (env->insns_flags & PPC_SEGMENT_64B) {
/* All Book3S 64bit CPUs implement level based DEC logic */
tb_env->flags |= PPC_DECR_UNDERFLOW_LEVEL;
}
/* Create new timer */
tb_env->decr_timer = timer_new_ns(QEMU_CLOCK_VIRTUAL, &cpu_ppc_decr_cb, cpu);
if (0) {

View File

@@ -65,9 +65,9 @@ static void spin_reset(void *opaque)
for (i = 0; i < MAX_CPUS; i++) {
SpinInfo *info = &s->spin[i];
info->pir = i;
info->r3 = i;
info->addr = 1;
stl_p(&info->pir, i);
stq_p(&info->r3, i);
stq_p(&info->addr, 1);
}
}

View File

@@ -342,6 +342,7 @@ uint32 float32_to_uint32( float32 STATUS_PARAM );
uint32 float32_to_uint32_round_to_zero( float32 STATUS_PARAM );
int64 float32_to_int64( float32 STATUS_PARAM );
uint64 float32_to_uint64(float32 STATUS_PARAM);
uint64 float32_to_uint64_round_to_zero(float32 STATUS_PARAM);
int64 float32_to_int64_round_to_zero( float32 STATUS_PARAM );
float64 float32_to_float64( float32 STATUS_PARAM );
floatx80 float32_to_floatx80( float32 STATUS_PARAM );

View File

@@ -44,6 +44,9 @@ struct ppc_tb_t {
#define PPC_DECR_ZERO_TRIGGERED (1 << 3) /* Decr interrupt triggered when
* the decrementer reaches zero.
*/
#define PPC_DECR_UNDERFLOW_LEVEL (1 << 4) /* Decr interrupt active when
* the most significant bit is 1.
*/
uint64_t cpu_ppc_get_tb(ppc_tb_t *tb_env, uint64_t vmclk, int64_t tb_offset);
clk_setup_cb cpu_ppc_tb_init (CPUPPCState *env, uint32_t freq);

View File

@@ -441,7 +441,7 @@ static int kvm_physical_sync_dirty_bitmap(MemoryRegionSection *section)
d.slot = mem->slot;
if (kvm_vm_ioctl(s, KVM_GET_DIRTY_LOG, &d) < 0) {
if (kvm_vm_ioctl(s, KVM_GET_DIRTY_LOG, &d) == -1) {
DPRINTF("ioctl failed %d\n", errno);
ret = -1;
break;

View File

@@ -17,7 +17,7 @@
- SLOF (Slimline Open Firmware) is a free IEEE 1275 Open Firmware
implementation for certain IBM POWER hardware. The sources are at
https://github.com/aik/SLOF, and the image currently in qemu is
built from git tag qemu-slof-20140304.
built from git tag qemu-slof-20140404.
- sgabios (the Serial Graphics Adapter option ROM) provides a means for
legacy x86 software to communicate with an attached serial console as

Binary file not shown.

View File

@@ -75,10 +75,13 @@ static void errmsg_dialog(DWORD err, const char *text, const char *opt = "")
#define chk(status) _chk(hr, status, "Failed to " #status, out)
#if !defined(__MINGW64_VERSION_MAJOR) || !defined(__MINGW64_VERSION_MINOR) || \
__MINGW64_VERSION_MAJOR * 100 + __MINGW64_VERSION_MINOR < 301
void __stdcall _com_issue_error(HRESULT hr)
{
errmsg(hr, "Unexpected error in COM");
}
#endif
template<class T>
HRESULT put_Value(ICatalogObject *pObj, LPCWSTR name, T val)

View File

@@ -1225,7 +1225,8 @@ Object *object_resolve_path_component(Object *parent, const gchar *part)
}
if (object_property_is_link(prop)) {
return *(Object **)prop->opaque;
LinkProperty *lprop = prop->opaque;
return *lprop->child;
} else if (object_property_is_child(prop)) {
return prop->opaque;
} else {

View File

@@ -1133,6 +1133,7 @@ uint64_t cpu_ppc_load_atbl (CPUPPCState *env);
uint32_t cpu_ppc_load_atbu (CPUPPCState *env);
void cpu_ppc_store_atbl (CPUPPCState *env, uint32_t value);
void cpu_ppc_store_atbu (CPUPPCState *env, uint32_t value);
bool ppc_decr_clear_on_delivery(CPUPPCState *env);
uint32_t cpu_ppc_load_decr (CPUPPCState *env);
void cpu_ppc_store_decr (CPUPPCState *env, uint32_t value);
uint32_t cpu_ppc_load_hdecr (CPUPPCState *env);

View File

@@ -723,7 +723,6 @@ void ppc_hw_interrupt(CPUPPCState *env)
if ((msr_ee != 0 || msr_hv == 0 || msr_pr != 0) && hdice != 0) {
/* Hypervisor decrementer exception */
if (env->pending_interrupts & (1 << PPC_INTERRUPT_HDECR)) {
env->pending_interrupts &= ~(1 << PPC_INTERRUPT_HDECR);
powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_HDECR);
return;
}
@@ -767,7 +766,9 @@ void ppc_hw_interrupt(CPUPPCState *env)
}
/* Decrementer exception */
if (env->pending_interrupts & (1 << PPC_INTERRUPT_DECR)) {
env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DECR);
if (ppc_decr_clear_on_delivery(env)) {
env->pending_interrupts &= ~(1 << PPC_INTERRUPT_DECR);
}
powerpc_excp(cpu, env->excp_model, POWERPC_EXCP_DECR);
return;
}

View File

@@ -1782,11 +1782,19 @@ typedef union _ppc_vsr_t {
float64 f64[2];
} ppc_vsr_t;
#if defined(HOST_WORDS_BIGENDIAN)
#define VsrW(i) u32[i]
#define VsrD(i) u64[i]
#else
#define VsrW(i) u32[3-(i)]
#define VsrD(i) u64[1-(i)]
#endif
static void getVSR(int n, ppc_vsr_t *vsr, CPUPPCState *env)
{
if (n < 32) {
vsr->f64[0] = env->fpr[n];
vsr->u64[1] = env->vsr[n];
vsr->VsrD(0) = env->fpr[n];
vsr->VsrD(1) = env->vsr[n];
} else {
vsr->u64[0] = env->avr[n-32].u64[0];
vsr->u64[1] = env->avr[n-32].u64[1];
@@ -1796,8 +1804,8 @@ static void getVSR(int n, ppc_vsr_t *vsr, CPUPPCState *env)
static void putVSR(int n, ppc_vsr_t *vsr, CPUPPCState *env)
{
if (n < 32) {
env->fpr[n] = vsr->f64[0];
env->vsr[n] = vsr->u64[1];
env->fpr[n] = vsr->VsrD(0);
env->vsr[n] = vsr->VsrD(1);
} else {
env->avr[n-32].u64[0] = vsr->u64[0];
env->avr[n-32].u64[1] = vsr->u64[1];
@@ -1812,7 +1820,7 @@ static void putVSR(int n, ppc_vsr_t *vsr, CPUPPCState *env)
* op - operation (add or sub)
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* sfprf - set FPRF
*/
#define VSX_ADD_SUB(name, op, nels, tp, fld, sfprf, r2sp) \
@@ -1829,44 +1837,44 @@ void helper_##name(CPUPPCState *env, uint32_t opcode) \
for (i = 0; i < nels; i++) { \
float_status tstat = env->fp_status; \
set_float_exception_flags(0, &tstat); \
xt.fld[i] = tp##_##op(xa.fld[i], xb.fld[i], &tstat); \
xt.fld = tp##_##op(xa.fld, xb.fld, &tstat); \
env->fp_status.float_exception_flags |= tstat.float_exception_flags; \
\
if (unlikely(tstat.float_exception_flags & float_flag_invalid)) { \
if (tp##_is_infinity(xa.fld[i]) && tp##_is_infinity(xb.fld[i])) {\
if (tp##_is_infinity(xa.fld) && tp##_is_infinity(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXISI, sfprf); \
} else if (tp##_is_signaling_nan(xa.fld[i]) || \
tp##_is_signaling_nan(xb.fld[i])) { \
} else if (tp##_is_signaling_nan(xa.fld) || \
tp##_is_signaling_nan(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, sfprf); \
} \
} \
\
if (r2sp) { \
xt.fld[i] = helper_frsp(env, xt.fld[i]); \
xt.fld = helper_frsp(env, xt.fld); \
} \
\
if (sfprf) { \
helper_compute_fprf(env, xt.fld[i], sfprf); \
helper_compute_fprf(env, xt.fld, sfprf); \
} \
} \
putVSR(xT(opcode), &xt, env); \
helper_float_check_status(env); \
}
VSX_ADD_SUB(xsadddp, add, 1, float64, f64, 1, 0)
VSX_ADD_SUB(xsaddsp, add, 1, float64, f64, 1, 1)
VSX_ADD_SUB(xvadddp, add, 2, float64, f64, 0, 0)
VSX_ADD_SUB(xvaddsp, add, 4, float32, f32, 0, 0)
VSX_ADD_SUB(xssubdp, sub, 1, float64, f64, 1, 0)
VSX_ADD_SUB(xssubsp, sub, 1, float64, f64, 1, 1)
VSX_ADD_SUB(xvsubdp, sub, 2, float64, f64, 0, 0)
VSX_ADD_SUB(xvsubsp, sub, 4, float32, f32, 0, 0)
VSX_ADD_SUB(xsadddp, add, 1, float64, VsrD(0), 1, 0)
VSX_ADD_SUB(xsaddsp, add, 1, float64, VsrD(0), 1, 1)
VSX_ADD_SUB(xvadddp, add, 2, float64, VsrD(i), 0, 0)
VSX_ADD_SUB(xvaddsp, add, 4, float32, VsrW(i), 0, 0)
VSX_ADD_SUB(xssubdp, sub, 1, float64, VsrD(0), 1, 0)
VSX_ADD_SUB(xssubsp, sub, 1, float64, VsrD(0), 1, 1)
VSX_ADD_SUB(xvsubdp, sub, 2, float64, VsrD(i), 0, 0)
VSX_ADD_SUB(xvsubsp, sub, 4, float32, VsrW(i), 0, 0)
/* VSX_MUL - VSX floating point multiply
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* sfprf - set FPRF
*/
#define VSX_MUL(op, nels, tp, fld, sfprf, r2sp) \
@@ -1883,25 +1891,25 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
for (i = 0; i < nels; i++) { \
float_status tstat = env->fp_status; \
set_float_exception_flags(0, &tstat); \
xt.fld[i] = tp##_mul(xa.fld[i], xb.fld[i], &tstat); \
xt.fld = tp##_mul(xa.fld, xb.fld, &tstat); \
env->fp_status.float_exception_flags |= tstat.float_exception_flags; \
\
if (unlikely(tstat.float_exception_flags & float_flag_invalid)) { \
if ((tp##_is_infinity(xa.fld[i]) && tp##_is_zero(xb.fld[i])) || \
(tp##_is_infinity(xb.fld[i]) && tp##_is_zero(xa.fld[i]))) { \
if ((tp##_is_infinity(xa.fld) && tp##_is_zero(xb.fld)) || \
(tp##_is_infinity(xb.fld) && tp##_is_zero(xa.fld))) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXIMZ, sfprf); \
} else if (tp##_is_signaling_nan(xa.fld[i]) || \
tp##_is_signaling_nan(xb.fld[i])) { \
} else if (tp##_is_signaling_nan(xa.fld) || \
tp##_is_signaling_nan(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, sfprf); \
} \
} \
\
if (r2sp) { \
xt.fld[i] = helper_frsp(env, xt.fld[i]); \
xt.fld = helper_frsp(env, xt.fld); \
} \
\
if (sfprf) { \
helper_compute_fprf(env, xt.fld[i], sfprf); \
helper_compute_fprf(env, xt.fld, sfprf); \
} \
} \
\
@@ -1909,16 +1917,16 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_MUL(xsmuldp, 1, float64, f64, 1, 0)
VSX_MUL(xsmulsp, 1, float64, f64, 1, 1)
VSX_MUL(xvmuldp, 2, float64, f64, 0, 0)
VSX_MUL(xvmulsp, 4, float32, f32, 0, 0)
VSX_MUL(xsmuldp, 1, float64, VsrD(0), 1, 0)
VSX_MUL(xsmulsp, 1, float64, VsrD(0), 1, 1)
VSX_MUL(xvmuldp, 2, float64, VsrD(i), 0, 0)
VSX_MUL(xvmulsp, 4, float32, VsrW(i), 0, 0)
/* VSX_DIV - VSX floating point divide
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* sfprf - set FPRF
*/
#define VSX_DIV(op, nels, tp, fld, sfprf, r2sp) \
@@ -1935,27 +1943,27 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
for (i = 0; i < nels; i++) { \
float_status tstat = env->fp_status; \
set_float_exception_flags(0, &tstat); \
xt.fld[i] = tp##_div(xa.fld[i], xb.fld[i], &tstat); \
xt.fld = tp##_div(xa.fld, xb.fld, &tstat); \
env->fp_status.float_exception_flags |= tstat.float_exception_flags; \
\
if (unlikely(tstat.float_exception_flags & float_flag_invalid)) { \
if (tp##_is_infinity(xa.fld[i]) && tp##_is_infinity(xb.fld[i])) { \
if (tp##_is_infinity(xa.fld) && tp##_is_infinity(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXIDI, sfprf); \
} else if (tp##_is_zero(xa.fld[i]) && \
tp##_is_zero(xb.fld[i])) { \
} else if (tp##_is_zero(xa.fld) && \
tp##_is_zero(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXZDZ, sfprf); \
} else if (tp##_is_signaling_nan(xa.fld[i]) || \
tp##_is_signaling_nan(xb.fld[i])) { \
} else if (tp##_is_signaling_nan(xa.fld) || \
tp##_is_signaling_nan(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, sfprf); \
} \
} \
\
if (r2sp) { \
xt.fld[i] = helper_frsp(env, xt.fld[i]); \
xt.fld = helper_frsp(env, xt.fld); \
} \
\
if (sfprf) { \
helper_compute_fprf(env, xt.fld[i], sfprf); \
helper_compute_fprf(env, xt.fld, sfprf); \
} \
} \
\
@@ -1963,16 +1971,16 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_DIV(xsdivdp, 1, float64, f64, 1, 0)
VSX_DIV(xsdivsp, 1, float64, f64, 1, 1)
VSX_DIV(xvdivdp, 2, float64, f64, 0, 0)
VSX_DIV(xvdivsp, 4, float32, f32, 0, 0)
VSX_DIV(xsdivdp, 1, float64, VsrD(0), 1, 0)
VSX_DIV(xsdivsp, 1, float64, VsrD(0), 1, 1)
VSX_DIV(xvdivdp, 2, float64, VsrD(i), 0, 0)
VSX_DIV(xvdivsp, 4, float32, VsrW(i), 0, 0)
/* VSX_RE - VSX floating point reciprocal estimate
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* sfprf - set FPRF
*/
#define VSX_RE(op, nels, tp, fld, sfprf, r2sp) \
@@ -1986,17 +1994,17 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_reset_fpstatus(env); \
\
for (i = 0; i < nels; i++) { \
if (unlikely(tp##_is_signaling_nan(xb.fld[i]))) { \
if (unlikely(tp##_is_signaling_nan(xb.fld))) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, sfprf); \
} \
xt.fld[i] = tp##_div(tp##_one, xb.fld[i], &env->fp_status); \
xt.fld = tp##_div(tp##_one, xb.fld, &env->fp_status); \
\
if (r2sp) { \
xt.fld[i] = helper_frsp(env, xt.fld[i]); \
xt.fld = helper_frsp(env, xt.fld); \
} \
\
if (sfprf) { \
helper_compute_fprf(env, xt.fld[0], sfprf); \
helper_compute_fprf(env, xt.fld, sfprf); \
} \
} \
\
@@ -2004,16 +2012,16 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_RE(xsredp, 1, float64, f64, 1, 0)
VSX_RE(xsresp, 1, float64, f64, 1, 1)
VSX_RE(xvredp, 2, float64, f64, 0, 0)
VSX_RE(xvresp, 4, float32, f32, 0, 0)
VSX_RE(xsredp, 1, float64, VsrD(0), 1, 0)
VSX_RE(xsresp, 1, float64, VsrD(0), 1, 1)
VSX_RE(xvredp, 2, float64, VsrD(i), 0, 0)
VSX_RE(xvresp, 4, float32, VsrW(i), 0, 0)
/* VSX_SQRT - VSX floating point square root
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* sfprf - set FPRF
*/
#define VSX_SQRT(op, nels, tp, fld, sfprf, r2sp) \
@@ -2029,23 +2037,23 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
for (i = 0; i < nels; i++) { \
float_status tstat = env->fp_status; \
set_float_exception_flags(0, &tstat); \
xt.fld[i] = tp##_sqrt(xb.fld[i], &tstat); \
xt.fld = tp##_sqrt(xb.fld, &tstat); \
env->fp_status.float_exception_flags |= tstat.float_exception_flags; \
\
if (unlikely(tstat.float_exception_flags & float_flag_invalid)) { \
if (tp##_is_neg(xb.fld[i]) && !tp##_is_zero(xb.fld[i])) { \
if (tp##_is_neg(xb.fld) && !tp##_is_zero(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSQRT, sfprf); \
} else if (tp##_is_signaling_nan(xb.fld[i])) { \
} else if (tp##_is_signaling_nan(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, sfprf); \
} \
} \
\
if (r2sp) { \
xt.fld[i] = helper_frsp(env, xt.fld[i]); \
xt.fld = helper_frsp(env, xt.fld); \
} \
\
if (sfprf) { \
helper_compute_fprf(env, xt.fld[i], sfprf); \
helper_compute_fprf(env, xt.fld, sfprf); \
} \
} \
\
@@ -2053,16 +2061,16 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_SQRT(xssqrtdp, 1, float64, f64, 1, 0)
VSX_SQRT(xssqrtsp, 1, float64, f64, 1, 1)
VSX_SQRT(xvsqrtdp, 2, float64, f64, 0, 0)
VSX_SQRT(xvsqrtsp, 4, float32, f32, 0, 0)
VSX_SQRT(xssqrtdp, 1, float64, VsrD(0), 1, 0)
VSX_SQRT(xssqrtsp, 1, float64, VsrD(0), 1, 1)
VSX_SQRT(xvsqrtdp, 2, float64, VsrD(i), 0, 0)
VSX_SQRT(xvsqrtsp, 4, float32, VsrW(i), 0, 0)
/* VSX_RSQRTE - VSX floating point reciprocal square root estimate
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* sfprf - set FPRF
*/
#define VSX_RSQRTE(op, nels, tp, fld, sfprf, r2sp) \
@@ -2078,24 +2086,24 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
for (i = 0; i < nels; i++) { \
float_status tstat = env->fp_status; \
set_float_exception_flags(0, &tstat); \
xt.fld[i] = tp##_sqrt(xb.fld[i], &tstat); \
xt.fld[i] = tp##_div(tp##_one, xt.fld[i], &tstat); \
xt.fld = tp##_sqrt(xb.fld, &tstat); \
xt.fld = tp##_div(tp##_one, xt.fld, &tstat); \
env->fp_status.float_exception_flags |= tstat.float_exception_flags; \
\
if (unlikely(tstat.float_exception_flags & float_flag_invalid)) { \
if (tp##_is_neg(xb.fld[i]) && !tp##_is_zero(xb.fld[i])) { \
if (tp##_is_neg(xb.fld) && !tp##_is_zero(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSQRT, sfprf); \
} else if (tp##_is_signaling_nan(xb.fld[i])) { \
} else if (tp##_is_signaling_nan(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, sfprf); \
} \
} \
\
if (r2sp) { \
xt.fld[i] = helper_frsp(env, xt.fld[i]); \
xt.fld = helper_frsp(env, xt.fld); \
} \
\
if (sfprf) { \
helper_compute_fprf(env, xt.fld[i], sfprf); \
helper_compute_fprf(env, xt.fld, sfprf); \
} \
} \
\
@@ -2103,16 +2111,16 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_RSQRTE(xsrsqrtedp, 1, float64, f64, 1, 0)
VSX_RSQRTE(xsrsqrtesp, 1, float64, f64, 1, 1)
VSX_RSQRTE(xvrsqrtedp, 2, float64, f64, 0, 0)
VSX_RSQRTE(xvrsqrtesp, 4, float32, f32, 0, 0)
VSX_RSQRTE(xsrsqrtedp, 1, float64, VsrD(0), 1, 0)
VSX_RSQRTE(xsrsqrtesp, 1, float64, VsrD(0), 1, 1)
VSX_RSQRTE(xvrsqrtedp, 2, float64, VsrD(i), 0, 0)
VSX_RSQRTE(xvrsqrtesp, 4, float32, VsrW(i), 0, 0)
/* VSX_TDIV - VSX floating point test for divide
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* emin - minimum unbiased exponent
* emax - maximum unbiased exponent
* nbits - number of fraction bits
@@ -2129,28 +2137,28 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
getVSR(xB(opcode), &xb, env); \
\
for (i = 0; i < nels; i++) { \
if (unlikely(tp##_is_infinity(xa.fld[i]) || \
tp##_is_infinity(xb.fld[i]) || \
tp##_is_zero(xb.fld[i]))) { \
if (unlikely(tp##_is_infinity(xa.fld) || \
tp##_is_infinity(xb.fld) || \
tp##_is_zero(xb.fld))) { \
fe_flag = 1; \
fg_flag = 1; \
} else { \
int e_a = ppc_##tp##_get_unbiased_exp(xa.fld[i]); \
int e_b = ppc_##tp##_get_unbiased_exp(xb.fld[i]); \
int e_a = ppc_##tp##_get_unbiased_exp(xa.fld); \
int e_b = ppc_##tp##_get_unbiased_exp(xb.fld); \
\
if (unlikely(tp##_is_any_nan(xa.fld[i]) || \
tp##_is_any_nan(xb.fld[i]))) { \
if (unlikely(tp##_is_any_nan(xa.fld) || \
tp##_is_any_nan(xb.fld))) { \
fe_flag = 1; \
} else if ((e_b <= emin) || (e_b >= (emax-2))) { \
fe_flag = 1; \
} else if (!tp##_is_zero(xa.fld[i]) && \
} else if (!tp##_is_zero(xa.fld) && \
(((e_a - e_b) >= emax) || \
((e_a - e_b) <= (emin+1)) || \
(e_a <= (emin+nbits)))) { \
fe_flag = 1; \
} \
\
if (unlikely(tp##_is_zero_or_denormal(xb.fld[i]))) { \
if (unlikely(tp##_is_zero_or_denormal(xb.fld))) { \
/* XB is not zero because of the above check and */ \
/* so must be denormalized. */ \
fg_flag = 1; \
@@ -2161,15 +2169,15 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
env->crf[BF(opcode)] = 0x8 | (fg_flag ? 4 : 0) | (fe_flag ? 2 : 0); \
}
VSX_TDIV(xstdivdp, 1, float64, f64, -1022, 1023, 52)
VSX_TDIV(xvtdivdp, 2, float64, f64, -1022, 1023, 52)
VSX_TDIV(xvtdivsp, 4, float32, f32, -126, 127, 23)
VSX_TDIV(xstdivdp, 1, float64, VsrD(0), -1022, 1023, 52)
VSX_TDIV(xvtdivdp, 2, float64, VsrD(i), -1022, 1023, 52)
VSX_TDIV(xvtdivsp, 4, float32, VsrW(i), -126, 127, 23)
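In the test-for-divide helper above, the fe_flag/fg_flag bits that end up in the CR field are derived almost entirely from the operands' unbiased exponents. As a standalone illustration of that quantity (the real code uses ppc_float64_get_unbiased_exp() on the softfloat representation; this hypothetical version just decodes a host double):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: extract the biased exponent of an IEEE-754 double
     * and remove the bias, the same value the e_a/e_b checks above compare
     * against emin/emax. */
    static int unbiased_exp(double d)
    {
        uint64_t bits;
        memcpy(&bits, &d, sizeof(bits));
        return (int)((bits >> 52) & 0x7ff) - 1023;
    }

    int main(void)
    {
        /* prints "0 3 -2"; for xstdivdp, e_b <= -1022 or e_b >= 1021 sets fe_flag */
        printf("%d %d %d\n", unbiased_exp(1.0), unbiased_exp(8.0), unbiased_exp(0.375));
        return 0;
    }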
/* VSX_TSQRT - VSX floating point test for square root
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* emin - minimum unbiased exponent
* emax - maximum unbiased exponent
* nbits - number of fraction bits
@@ -2186,25 +2194,25 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
getVSR(xB(opcode), &xb, env); \
\
for (i = 0; i < nels; i++) { \
if (unlikely(tp##_is_infinity(xb.fld[i]) || \
tp##_is_zero(xb.fld[i]))) { \
if (unlikely(tp##_is_infinity(xb.fld) || \
tp##_is_zero(xb.fld))) { \
fe_flag = 1; \
fg_flag = 1; \
} else { \
int e_b = ppc_##tp##_get_unbiased_exp(xb.fld[i]); \
int e_b = ppc_##tp##_get_unbiased_exp(xb.fld); \
\
if (unlikely(tp##_is_any_nan(xb.fld[i]))) { \
if (unlikely(tp##_is_any_nan(xb.fld))) { \
fe_flag = 1; \
} else if (unlikely(tp##_is_zero(xb.fld[i]))) { \
} else if (unlikely(tp##_is_zero(xb.fld))) { \
fe_flag = 1; \
} else if (unlikely(tp##_is_neg(xb.fld[i]))) { \
} else if (unlikely(tp##_is_neg(xb.fld))) { \
fe_flag = 1; \
} else if (!tp##_is_zero(xb.fld[i]) && \
} else if (!tp##_is_zero(xb.fld) && \
(e_b <= (emin+nbits))) { \
fe_flag = 1; \
} \
\
if (unlikely(tp##_is_zero_or_denormal(xb.fld[i]))) { \
if (unlikely(tp##_is_zero_or_denormal(xb.fld))) { \
/* XB is not zero because of the above check and */ \
/* therefore must be denormalized. */ \
fg_flag = 1; \
@@ -2215,15 +2223,15 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
env->crf[BF(opcode)] = 0x8 | (fg_flag ? 4 : 0) | (fe_flag ? 2 : 0); \
}
VSX_TSQRT(xstsqrtdp, 1, float64, f64, -1022, 52)
VSX_TSQRT(xvtsqrtdp, 2, float64, f64, -1022, 52)
VSX_TSQRT(xvtsqrtsp, 4, float32, f32, -126, 23)
VSX_TSQRT(xstsqrtdp, 1, float64, VsrD(0), -1022, 52)
VSX_TSQRT(xvtsqrtdp, 2, float64, VsrD(i), -1022, 52)
VSX_TSQRT(xvtsqrtsp, 4, float32, VsrW(i), -126, 23)
/* VSX_MADD - VSX floating point multiply/add variations
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* maddflgs - flags for the float*muladd routine that control the
* various forms (madd, msub, nmadd, nmsub)
* afrm - A form (1=A, 0=M)
@@ -2259,43 +2267,43 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
/* Avoid double rounding errors by rounding the intermediate */ \
/* result to odd. */ \
set_float_rounding_mode(float_round_to_zero, &tstat); \
xt_out.fld[i] = tp##_muladd(xa.fld[i], b->fld[i], c->fld[i], \
xt_out.fld = tp##_muladd(xa.fld, b->fld, c->fld, \
maddflgs, &tstat); \
xt_out.fld[i] |= (get_float_exception_flags(&tstat) & \
xt_out.fld |= (get_float_exception_flags(&tstat) & \
float_flag_inexact) != 0; \
} else { \
xt_out.fld[i] = tp##_muladd(xa.fld[i], b->fld[i], c->fld[i], \
xt_out.fld = tp##_muladd(xa.fld, b->fld, c->fld, \
maddflgs, &tstat); \
} \
env->fp_status.float_exception_flags |= tstat.float_exception_flags; \
\
if (unlikely(tstat.float_exception_flags & float_flag_invalid)) { \
if (tp##_is_signaling_nan(xa.fld[i]) || \
tp##_is_signaling_nan(b->fld[i]) || \
tp##_is_signaling_nan(c->fld[i])) { \
if (tp##_is_signaling_nan(xa.fld) || \
tp##_is_signaling_nan(b->fld) || \
tp##_is_signaling_nan(c->fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, sfprf); \
tstat.float_exception_flags &= ~float_flag_invalid; \
} \
if ((tp##_is_infinity(xa.fld[i]) && tp##_is_zero(b->fld[i])) || \
(tp##_is_zero(xa.fld[i]) && tp##_is_infinity(b->fld[i]))) { \
xt_out.fld[i] = float64_to_##tp(fload_invalid_op_excp(env, \
if ((tp##_is_infinity(xa.fld) && tp##_is_zero(b->fld)) || \
(tp##_is_zero(xa.fld) && tp##_is_infinity(b->fld))) { \
xt_out.fld = float64_to_##tp(fload_invalid_op_excp(env, \
POWERPC_EXCP_FP_VXIMZ, sfprf), &env->fp_status); \
tstat.float_exception_flags &= ~float_flag_invalid; \
} \
if ((tstat.float_exception_flags & float_flag_invalid) && \
((tp##_is_infinity(xa.fld[i]) || \
tp##_is_infinity(b->fld[i])) && \
tp##_is_infinity(c->fld[i]))) { \
((tp##_is_infinity(xa.fld) || \
tp##_is_infinity(b->fld)) && \
tp##_is_infinity(c->fld))) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXISI, sfprf); \
} \
} \
\
if (r2sp) { \
xt_out.fld[i] = helper_frsp(env, xt_out.fld[i]); \
xt_out.fld = helper_frsp(env, xt_out.fld); \
} \
\
if (sfprf) { \
helper_compute_fprf(env, xt_out.fld[i], sfprf); \
helper_compute_fprf(env, xt_out.fld, sfprf); \
} \
} \
putVSR(xT(opcode), &xt_out, env); \
@@ -2307,41 +2315,41 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
#define NMADD_FLGS float_muladd_negate_result
#define NMSUB_FLGS (float_muladd_negate_c | float_muladd_negate_result)
VSX_MADD(xsmaddadp, 1, float64, f64, MADD_FLGS, 1, 1, 0)
VSX_MADD(xsmaddmdp, 1, float64, f64, MADD_FLGS, 0, 1, 0)
VSX_MADD(xsmsubadp, 1, float64, f64, MSUB_FLGS, 1, 1, 0)
VSX_MADD(xsmsubmdp, 1, float64, f64, MSUB_FLGS, 0, 1, 0)
VSX_MADD(xsnmaddadp, 1, float64, f64, NMADD_FLGS, 1, 1, 0)
VSX_MADD(xsnmaddmdp, 1, float64, f64, NMADD_FLGS, 0, 1, 0)
VSX_MADD(xsnmsubadp, 1, float64, f64, NMSUB_FLGS, 1, 1, 0)
VSX_MADD(xsnmsubmdp, 1, float64, f64, NMSUB_FLGS, 0, 1, 0)
VSX_MADD(xsmaddadp, 1, float64, VsrD(0), MADD_FLGS, 1, 1, 0)
VSX_MADD(xsmaddmdp, 1, float64, VsrD(0), MADD_FLGS, 0, 1, 0)
VSX_MADD(xsmsubadp, 1, float64, VsrD(0), MSUB_FLGS, 1, 1, 0)
VSX_MADD(xsmsubmdp, 1, float64, VsrD(0), MSUB_FLGS, 0, 1, 0)
VSX_MADD(xsnmaddadp, 1, float64, VsrD(0), NMADD_FLGS, 1, 1, 0)
VSX_MADD(xsnmaddmdp, 1, float64, VsrD(0), NMADD_FLGS, 0, 1, 0)
VSX_MADD(xsnmsubadp, 1, float64, VsrD(0), NMSUB_FLGS, 1, 1, 0)
VSX_MADD(xsnmsubmdp, 1, float64, VsrD(0), NMSUB_FLGS, 0, 1, 0)
VSX_MADD(xsmaddasp, 1, float64, f64, MADD_FLGS, 1, 1, 1)
VSX_MADD(xsmaddmsp, 1, float64, f64, MADD_FLGS, 0, 1, 1)
VSX_MADD(xsmsubasp, 1, float64, f64, MSUB_FLGS, 1, 1, 1)
VSX_MADD(xsmsubmsp, 1, float64, f64, MSUB_FLGS, 0, 1, 1)
VSX_MADD(xsnmaddasp, 1, float64, f64, NMADD_FLGS, 1, 1, 1)
VSX_MADD(xsnmaddmsp, 1, float64, f64, NMADD_FLGS, 0, 1, 1)
VSX_MADD(xsnmsubasp, 1, float64, f64, NMSUB_FLGS, 1, 1, 1)
VSX_MADD(xsnmsubmsp, 1, float64, f64, NMSUB_FLGS, 0, 1, 1)
VSX_MADD(xsmaddasp, 1, float64, VsrD(0), MADD_FLGS, 1, 1, 1)
VSX_MADD(xsmaddmsp, 1, float64, VsrD(0), MADD_FLGS, 0, 1, 1)
VSX_MADD(xsmsubasp, 1, float64, VsrD(0), MSUB_FLGS, 1, 1, 1)
VSX_MADD(xsmsubmsp, 1, float64, VsrD(0), MSUB_FLGS, 0, 1, 1)
VSX_MADD(xsnmaddasp, 1, float64, VsrD(0), NMADD_FLGS, 1, 1, 1)
VSX_MADD(xsnmaddmsp, 1, float64, VsrD(0), NMADD_FLGS, 0, 1, 1)
VSX_MADD(xsnmsubasp, 1, float64, VsrD(0), NMSUB_FLGS, 1, 1, 1)
VSX_MADD(xsnmsubmsp, 1, float64, VsrD(0), NMSUB_FLGS, 0, 1, 1)
VSX_MADD(xvmaddadp, 2, float64, f64, MADD_FLGS, 1, 0, 0)
VSX_MADD(xvmaddmdp, 2, float64, f64, MADD_FLGS, 0, 0, 0)
VSX_MADD(xvmsubadp, 2, float64, f64, MSUB_FLGS, 1, 0, 0)
VSX_MADD(xvmsubmdp, 2, float64, f64, MSUB_FLGS, 0, 0, 0)
VSX_MADD(xvnmaddadp, 2, float64, f64, NMADD_FLGS, 1, 0, 0)
VSX_MADD(xvnmaddmdp, 2, float64, f64, NMADD_FLGS, 0, 0, 0)
VSX_MADD(xvnmsubadp, 2, float64, f64, NMSUB_FLGS, 1, 0, 0)
VSX_MADD(xvnmsubmdp, 2, float64, f64, NMSUB_FLGS, 0, 0, 0)
VSX_MADD(xvmaddadp, 2, float64, VsrD(i), MADD_FLGS, 1, 0, 0)
VSX_MADD(xvmaddmdp, 2, float64, VsrD(i), MADD_FLGS, 0, 0, 0)
VSX_MADD(xvmsubadp, 2, float64, VsrD(i), MSUB_FLGS, 1, 0, 0)
VSX_MADD(xvmsubmdp, 2, float64, VsrD(i), MSUB_FLGS, 0, 0, 0)
VSX_MADD(xvnmaddadp, 2, float64, VsrD(i), NMADD_FLGS, 1, 0, 0)
VSX_MADD(xvnmaddmdp, 2, float64, VsrD(i), NMADD_FLGS, 0, 0, 0)
VSX_MADD(xvnmsubadp, 2, float64, VsrD(i), NMSUB_FLGS, 1, 0, 0)
VSX_MADD(xvnmsubmdp, 2, float64, VsrD(i), NMSUB_FLGS, 0, 0, 0)
VSX_MADD(xvmaddasp, 4, float32, f32, MADD_FLGS, 1, 0, 0)
VSX_MADD(xvmaddmsp, 4, float32, f32, MADD_FLGS, 0, 0, 0)
VSX_MADD(xvmsubasp, 4, float32, f32, MSUB_FLGS, 1, 0, 0)
VSX_MADD(xvmsubmsp, 4, float32, f32, MSUB_FLGS, 0, 0, 0)
VSX_MADD(xvnmaddasp, 4, float32, f32, NMADD_FLGS, 1, 0, 0)
VSX_MADD(xvnmaddmsp, 4, float32, f32, NMADD_FLGS, 0, 0, 0)
VSX_MADD(xvnmsubasp, 4, float32, f32, NMSUB_FLGS, 1, 0, 0)
VSX_MADD(xvnmsubmsp, 4, float32, f32, NMSUB_FLGS, 0, 0, 0)
VSX_MADD(xvmaddasp, 4, float32, VsrW(i), MADD_FLGS, 1, 0, 0)
VSX_MADD(xvmaddmsp, 4, float32, VsrW(i), MADD_FLGS, 0, 0, 0)
VSX_MADD(xvmsubasp, 4, float32, VsrW(i), MSUB_FLGS, 1, 0, 0)
VSX_MADD(xvmsubmsp, 4, float32, VsrW(i), MSUB_FLGS, 0, 0, 0)
VSX_MADD(xvnmaddasp, 4, float32, VsrW(i), NMADD_FLGS, 1, 0, 0)
VSX_MADD(xvnmaddmsp, 4, float32, VsrW(i), NMADD_FLGS, 0, 0, 0)
VSX_MADD(xvnmsubasp, 4, float32, VsrW(i), NMSUB_FLGS, 1, 0, 0)
VSX_MADD(xvnmsubmsp, 4, float32, VsrW(i), NMSUB_FLGS, 0, 0, 0)
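For the single-precision (r2sp) forms above, the multiply-add is evaluated in double precision and only rounded to single afterwards by helper_frsp(). Rounding the intermediate to nearest and then rounding again to single can differ from a single correctly rounded operation, which is what the "round to odd" trick in the macro avoids: the intermediate is truncated (float_round_to_zero) and the inexact flag is OR-ed into its least-significant bit, so the discarded bits survive as a sticky bit for the final rounding. A small standalone illustration of the idea on a bare integer (hypothetical helper, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    /* Drop the low drop_bits of value, but force the result odd whenever
     * anything non-zero was discarded, so a later rounding step still sees
     * that the value was inexact. */
    static uint64_t round_to_odd(uint64_t value, unsigned drop_bits)
    {
        uint64_t dropped = value & ((1ULL << drop_bits) - 1);
        uint64_t result = value >> drop_bits;

        if (dropped != 0) {
            result |= 1;
        }
        return result;
    }

    int main(void)
    {
        /* 0x1220 is exact -> 0x122; 0x1224 is inexact -> 0x123 (sticky bit set) */
        printf("0x%llx 0x%llx\n",
               (unsigned long long)round_to_odd(0x1220, 4),
               (unsigned long long)round_to_odd(0x1224, 4));
        return 0;
    }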
#define VSX_SCALAR_CMP(op, ordered) \
void helper_##op(CPUPPCState *env, uint32_t opcode) \
@@ -2352,10 +2360,10 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
getVSR(xA(opcode), &xa, env); \
getVSR(xB(opcode), &xb, env); \
\
if (unlikely(float64_is_any_nan(xa.f64[0]) || \
float64_is_any_nan(xb.f64[0]))) { \
if (float64_is_signaling_nan(xa.f64[0]) || \
float64_is_signaling_nan(xb.f64[0])) { \
if (unlikely(float64_is_any_nan(xa.VsrD(0)) || \
float64_is_any_nan(xb.VsrD(0)))) { \
if (float64_is_signaling_nan(xa.VsrD(0)) || \
float64_is_signaling_nan(xb.VsrD(0))) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 0); \
} \
if (ordered) { \
@@ -2363,9 +2371,10 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
} \
cc = 1; \
} else { \
if (float64_lt(xa.f64[0], xb.f64[0], &env->fp_status)) { \
if (float64_lt(xa.VsrD(0), xb.VsrD(0), &env->fp_status)) { \
cc = 8; \
} else if (!float64_le(xa.f64[0], xb.f64[0], &env->fp_status)) { \
} else if (!float64_le(xa.VsrD(0), xb.VsrD(0), \
&env->fp_status)) { \
cc = 4; \
} else { \
cc = 2; \
@@ -2390,7 +2399,7 @@ VSX_SCALAR_CMP(xscmpudp, 0)
* op - operation (max or min)
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
*/
#define VSX_MAX_MIN(name, op, nels, tp, fld) \
void helper_##name(CPUPPCState *env, uint32_t opcode) \
@@ -2403,9 +2412,9 @@ void helper_##name(CPUPPCState *env, uint32_t opcode) \
getVSR(xT(opcode), &xt, env); \
\
for (i = 0; i < nels; i++) { \
xt.fld[i] = tp##_##op(xa.fld[i], xb.fld[i], &env->fp_status); \
if (unlikely(tp##_is_signaling_nan(xa.fld[i]) || \
tp##_is_signaling_nan(xb.fld[i]))) { \
xt.fld = tp##_##op(xa.fld, xb.fld, &env->fp_status); \
if (unlikely(tp##_is_signaling_nan(xa.fld) || \
tp##_is_signaling_nan(xb.fld))) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 0); \
} \
} \
@@ -2414,18 +2423,18 @@ void helper_##name(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_MAX_MIN(xsmaxdp, maxnum, 1, float64, f64)
VSX_MAX_MIN(xvmaxdp, maxnum, 2, float64, f64)
VSX_MAX_MIN(xvmaxsp, maxnum, 4, float32, f32)
VSX_MAX_MIN(xsmindp, minnum, 1, float64, f64)
VSX_MAX_MIN(xvmindp, minnum, 2, float64, f64)
VSX_MAX_MIN(xvminsp, minnum, 4, float32, f32)
VSX_MAX_MIN(xsmaxdp, maxnum, 1, float64, VsrD(0))
VSX_MAX_MIN(xvmaxdp, maxnum, 2, float64, VsrD(i))
VSX_MAX_MIN(xvmaxsp, maxnum, 4, float32, VsrW(i))
VSX_MAX_MIN(xsmindp, minnum, 1, float64, VsrD(0))
VSX_MAX_MIN(xvmindp, minnum, 2, float64, VsrD(i))
VSX_MAX_MIN(xvminsp, minnum, 4, float32, VsrW(i))
/* VSX_CMP - VSX floating point compare
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* cmp - comparison operation
* svxvc - set VXVC bit
*/
@@ -2442,23 +2451,23 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
getVSR(xT(opcode), &xt, env); \
\
for (i = 0; i < nels; i++) { \
if (unlikely(tp##_is_any_nan(xa.fld[i]) || \
tp##_is_any_nan(xb.fld[i]))) { \
if (tp##_is_signaling_nan(xa.fld[i]) || \
tp##_is_signaling_nan(xb.fld[i])) { \
if (unlikely(tp##_is_any_nan(xa.fld) || \
tp##_is_any_nan(xb.fld))) { \
if (tp##_is_signaling_nan(xa.fld) || \
tp##_is_signaling_nan(xb.fld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 0); \
} \
if (svxvc) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXVC, 0); \
} \
xt.fld[i] = 0; \
xt.fld = 0; \
all_true = 0; \
} else { \
if (tp##_##cmp(xb.fld[i], xa.fld[i], &env->fp_status) == 1) { \
xt.fld[i] = -1; \
if (tp##_##cmp(xb.fld, xa.fld, &env->fp_status) == 1) { \
xt.fld = -1; \
all_false = 0; \
} else { \
xt.fld[i] = 0; \
xt.fld = 0; \
all_true = 0; \
} \
} \
@@ -2471,18 +2480,12 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_CMP(xvcmpeqdp, 2, float64, f64, eq, 0)
VSX_CMP(xvcmpgedp, 2, float64, f64, le, 1)
VSX_CMP(xvcmpgtdp, 2, float64, f64, lt, 1)
VSX_CMP(xvcmpeqsp, 4, float32, f32, eq, 0)
VSX_CMP(xvcmpgesp, 4, float32, f32, le, 1)
VSX_CMP(xvcmpgtsp, 4, float32, f32, lt, 1)
#if defined(HOST_WORDS_BIGENDIAN)
#define JOFFSET 0
#else
#define JOFFSET 1
#endif
VSX_CMP(xvcmpeqdp, 2, float64, VsrD(i), eq, 0)
VSX_CMP(xvcmpgedp, 2, float64, VsrD(i), le, 1)
VSX_CMP(xvcmpgtdp, 2, float64, VsrD(i), lt, 1)
VSX_CMP(xvcmpeqsp, 4, float32, VsrW(i), eq, 0)
VSX_CMP(xvcmpgesp, 4, float32, VsrW(i), le, 1)
VSX_CMP(xvcmpgtsp, 4, float32, VsrW(i), lt, 1)
/* VSX_CVT_FP_TO_FP - VSX floating point/floating point conversion
* op - instruction mnemonic
@@ -2503,7 +2506,6 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
getVSR(xT(opcode), &xt, env); \
\
for (i = 0; i < nels; i++) { \
int j = 2*i + JOFFSET; \
xt.tfld = stp##_to_##ttp(xb.sfld, &env->fp_status); \
if (unlikely(stp##_is_signaling_nan(xb.sfld))) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 0); \
@@ -2519,10 +2521,10 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_CVT_FP_TO_FP(xscvdpsp, 1, float64, float32, f64[i], f32[j], 1)
VSX_CVT_FP_TO_FP(xscvspdp, 1, float32, float64, f32[j], f64[i], 1)
VSX_CVT_FP_TO_FP(xvcvdpsp, 2, float64, float32, f64[i], f32[j], 0)
VSX_CVT_FP_TO_FP(xvcvspdp, 2, float32, float64, f32[j], f64[i], 0)
VSX_CVT_FP_TO_FP(xscvdpsp, 1, float64, float32, VsrD(0), VsrW(0), 1)
VSX_CVT_FP_TO_FP(xscvspdp, 1, float32, float64, VsrW(0), VsrD(0), 1)
VSX_CVT_FP_TO_FP(xvcvdpsp, 2, float64, float32, VsrD(i), VsrW(2*i), 0)
VSX_CVT_FP_TO_FP(xvcvspdp, 2, float32, float64, VsrW(2*i), VsrD(i), 0)
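The per-loop index variable (int j = 2*i + JOFFSET) and the JOFFSET define are gone from these conversion helpers: the architectural word number is now written directly into the field argument, e.g. VsrW(2*i) for the single-precision results of xvcvdpsp, and the accessor takes care of host byte order. A tiny standalone sketch of that loop shape (plain host floats stand in for the softfloat types; illustrative only):

    #include <stdio.h>

    int main(void)
    {
        /* A 2-element double-to-single conversion fills architectural
         * words 0 and 2 of the destination, which is why the target field
         * is spelled VsrW(2*i) above rather than a hand-computed j. */
        double src[2] = { 1.5, -2.25 };
        float dst_words[4] = { 0 };

        for (int i = 0; i < 2; i++) {
            dst_words[2 * i] = (float)src[i];
        }
        printf("%g %g\n", dst_words[0], dst_words[2]);   /* 1.5 -2.25 */
        return 0;
    }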
uint64_t helper_xscvdpspn(CPUPPCState *env, uint64_t xb)
{
@@ -2547,10 +2549,9 @@ uint64_t helper_xscvspdpn(CPUPPCState *env, uint64_t xb)
* ttp - target type (int32, uint32, int64 or uint64)
* sfld - source vsr_t field
* tfld - target vsr_t field
* jdef - definition of the j index (i or 2*i)
* rnan - resulting NaN
*/
#define VSX_CVT_FP_TO_INT(op, nels, stp, ttp, sfld, tfld, jdef, rnan) \
#define VSX_CVT_FP_TO_INT(op, nels, stp, ttp, sfld, tfld, rnan) \
void helper_##op(CPUPPCState *env, uint32_t opcode) \
{ \
ppc_vsr_t xt, xb; \
@@ -2560,7 +2561,6 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
getVSR(xT(opcode), &xt, env); \
\
for (i = 0; i < nels; i++) { \
int j = jdef; \
if (unlikely(stp##_is_any_nan(xb.sfld))) { \
if (stp##_is_signaling_nan(xb.sfld)) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 0); \
@@ -2568,7 +2568,8 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXCVI, 0); \
xt.tfld = rnan; \
} else { \
xt.tfld = stp##_to_##ttp(xb.sfld, &env->fp_status); \
xt.tfld = stp##_to_##ttp##_round_to_zero(xb.sfld, \
&env->fp_status); \
if (env->fp_status.float_exception_flags & float_flag_invalid) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXCVI, 0); \
} \
@@ -2579,27 +2580,23 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_CVT_FP_TO_INT(xscvdpsxds, 1, float64, int64, f64[j], u64[i], i, \
VSX_CVT_FP_TO_INT(xscvdpsxds, 1, float64, int64, VsrD(0), VsrD(0), \
0x8000000000000000ULL)
VSX_CVT_FP_TO_INT(xscvdpsxws, 1, float64, int32, f64[i], u32[j], \
2*i + JOFFSET, 0x80000000U)
VSX_CVT_FP_TO_INT(xscvdpuxds, 1, float64, uint64, f64[j], u64[i], i, 0ULL)
VSX_CVT_FP_TO_INT(xscvdpuxws, 1, float64, uint32, f64[i], u32[j], \
2*i + JOFFSET, 0U)
VSX_CVT_FP_TO_INT(xvcvdpsxds, 2, float64, int64, f64[j], u64[i], i, \
0x8000000000000000ULL)
VSX_CVT_FP_TO_INT(xvcvdpsxws, 2, float64, int32, f64[i], u32[j], \
2*i + JOFFSET, 0x80000000U)
VSX_CVT_FP_TO_INT(xvcvdpuxds, 2, float64, uint64, f64[j], u64[i], i, 0ULL)
VSX_CVT_FP_TO_INT(xvcvdpuxws, 2, float64, uint32, f64[i], u32[j], \
2*i + JOFFSET, 0U)
VSX_CVT_FP_TO_INT(xvcvspsxds, 2, float32, int64, f32[j], u64[i], \
2*i + JOFFSET, 0x8000000000000000ULL)
VSX_CVT_FP_TO_INT(xvcvspsxws, 4, float32, int32, f32[j], u32[j], i, \
VSX_CVT_FP_TO_INT(xscvdpsxws, 1, float64, int32, VsrD(0), VsrW(1), \
0x80000000U)
VSX_CVT_FP_TO_INT(xvcvspuxds, 2, float32, uint64, f32[j], u64[i], \
2*i + JOFFSET, 0ULL)
VSX_CVT_FP_TO_INT(xvcvspuxws, 4, float32, uint32, f32[j], u32[i], i, 0U)
VSX_CVT_FP_TO_INT(xscvdpuxds, 1, float64, uint64, VsrD(0), VsrD(0), 0ULL)
VSX_CVT_FP_TO_INT(xscvdpuxws, 1, float64, uint32, VsrD(0), VsrW(1), 0U)
VSX_CVT_FP_TO_INT(xvcvdpsxds, 2, float64, int64, VsrD(i), VsrD(i), \
0x8000000000000000ULL)
VSX_CVT_FP_TO_INT(xvcvdpsxws, 2, float64, int32, VsrD(i), VsrW(2*i), \
0x80000000U)
VSX_CVT_FP_TO_INT(xvcvdpuxds, 2, float64, uint64, VsrD(i), VsrD(i), 0ULL)
VSX_CVT_FP_TO_INT(xvcvdpuxws, 2, float64, uint32, VsrD(i), VsrW(2*i), 0U)
VSX_CVT_FP_TO_INT(xvcvspsxds, 2, float32, int64, VsrW(2*i), VsrD(i), \
0x8000000000000000ULL)
VSX_CVT_FP_TO_INT(xvcvspsxws, 4, float32, int32, VsrW(i), VsrW(i), 0x80000000U)
VSX_CVT_FP_TO_INT(xvcvspuxds, 2, float32, uint64, VsrW(2*i), VsrD(i), 0ULL)
VSX_CVT_FP_TO_INT(xvcvspuxws, 4, float32, uint32, VsrW(i), VsrW(i), 0U)
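These FP-to-integer helpers also change rounding behaviour: the plain stp##_to_##ttp() conversion honours the current rounding mode, while the VSX convert-to-integer instructions always truncate, hence the switch to the *_round_to_zero softfloat variants. Outside softfloat the same truncation is what a C cast performs, as this small standalone example shows (NaN and out-of-range inputs are handled separately via VXCVI in the macro):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        double vals[] = { 2.9, -2.9, 0.7, -0.7 };

        /* C float-to-integer conversion rounds toward zero, matching the
         * *_round_to_zero routines used above; prints 2, -2, 0, 0. */
        for (int i = 0; i < 4; i++) {
            printf("%5.1f -> %lld\n", vals[i], (long long)(int64_t)vals[i]);
        }
        return 0;
    }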
/* VSX_CVT_INT_TO_FP - VSX integer to floating point conversion
* op - instruction mnemonic
@@ -2611,7 +2608,7 @@ VSX_CVT_FP_TO_INT(xvcvspuxws, 4, float32, uint32, f32[j], u32[i], i, 0U)
* jdef - definition of the j index (i or 2*i)
* sfprf - set FPRF
*/
#define VSX_CVT_INT_TO_FP(op, nels, stp, ttp, sfld, tfld, jdef, sfprf, r2sp) \
#define VSX_CVT_INT_TO_FP(op, nels, stp, ttp, sfld, tfld, sfprf, r2sp) \
void helper_##op(CPUPPCState *env, uint32_t opcode) \
{ \
ppc_vsr_t xt, xb; \
@@ -2621,7 +2618,6 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
getVSR(xT(opcode), &xt, env); \
\
for (i = 0; i < nels; i++) { \
int j = jdef; \
xt.tfld = stp##_to_##ttp(xb.sfld, &env->fp_status); \
if (r2sp) { \
xt.tfld = helper_frsp(env, xt.tfld); \
@@ -2635,22 +2631,18 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_CVT_INT_TO_FP(xscvsxddp, 1, int64, float64, u64[j], f64[i], i, 1, 0)
VSX_CVT_INT_TO_FP(xscvuxddp, 1, uint64, float64, u64[j], f64[i], i, 1, 0)
VSX_CVT_INT_TO_FP(xscvsxdsp, 1, int64, float64, u64[j], f64[i], i, 1, 1)
VSX_CVT_INT_TO_FP(xscvuxdsp, 1, uint64, float64, u64[j], f64[i], i, 1, 1)
VSX_CVT_INT_TO_FP(xvcvsxddp, 2, int64, float64, u64[j], f64[i], i, 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxddp, 2, uint64, float64, u64[j], f64[i], i, 0, 0)
VSX_CVT_INT_TO_FP(xvcvsxwdp, 2, int32, float64, u32[j], f64[i], \
2*i + JOFFSET, 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxwdp, 2, uint64, float64, u32[j], f64[i], \
2*i + JOFFSET, 0, 0)
VSX_CVT_INT_TO_FP(xvcvsxdsp, 2, int64, float32, u64[i], f32[j], \
2*i + JOFFSET, 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxdsp, 2, uint64, float32, u64[i], f32[j], \
2*i + JOFFSET, 0, 0)
VSX_CVT_INT_TO_FP(xvcvsxwsp, 4, int32, float32, u32[j], f32[i], i, 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxwsp, 4, uint32, float32, u32[j], f32[i], i, 0, 0)
VSX_CVT_INT_TO_FP(xscvsxddp, 1, int64, float64, VsrD(0), VsrD(0), 1, 0)
VSX_CVT_INT_TO_FP(xscvuxddp, 1, uint64, float64, VsrD(0), VsrD(0), 1, 0)
VSX_CVT_INT_TO_FP(xscvsxdsp, 1, int64, float64, VsrD(0), VsrD(0), 1, 1)
VSX_CVT_INT_TO_FP(xscvuxdsp, 1, uint64, float64, VsrD(0), VsrD(0), 1, 1)
VSX_CVT_INT_TO_FP(xvcvsxddp, 2, int64, float64, VsrD(i), VsrD(i), 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxddp, 2, uint64, float64, VsrD(i), VsrD(i), 0, 0)
VSX_CVT_INT_TO_FP(xvcvsxwdp, 2, int32, float64, VsrW(2*i), VsrD(i), 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxwdp, 2, uint64, float64, VsrW(2*i), VsrD(i), 0, 0)
VSX_CVT_INT_TO_FP(xvcvsxdsp, 2, int64, float32, VsrD(i), VsrW(2*i), 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxdsp, 2, uint64, float32, VsrD(i), VsrW(2*i), 0, 0)
VSX_CVT_INT_TO_FP(xvcvsxwsp, 4, int32, float32, VsrW(i), VsrW(i), 0, 0)
VSX_CVT_INT_TO_FP(xvcvuxwsp, 4, uint32, float32, VsrW(i), VsrW(i), 0, 0)
/* For "use current rounding mode", define a value that will not be one of
* the existing rounding model enums.
@@ -2662,7 +2654,7 @@ VSX_CVT_INT_TO_FP(xvcvuxwsp, 4, uint32, float32, u32[j], f32[i], i, 0, 0)
* op - instruction mnemonic
* nels - number of elements (1, 2 or 4)
* tp - type (float32 or float64)
* fld - vsr_t field (f32 or f64)
* fld - vsr_t field (VsrD(*) or VsrW(*))
* rmode - rounding mode
* sfprf - set FPRF
*/
@@ -2679,14 +2671,14 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
} \
\
for (i = 0; i < nels; i++) { \
if (unlikely(tp##_is_signaling_nan(xb.fld[i]))) { \
if (unlikely(tp##_is_signaling_nan(xb.fld))) { \
fload_invalid_op_excp(env, POWERPC_EXCP_FP_VXSNAN, 0); \
xt.fld[i] = tp##_snan_to_qnan(xb.fld[i]); \
xt.fld = tp##_snan_to_qnan(xb.fld); \
} else { \
xt.fld[i] = tp##_round_to_int(xb.fld[i], &env->fp_status); \
xt.fld = tp##_round_to_int(xb.fld, &env->fp_status); \
} \
if (sfprf) { \
helper_compute_fprf(env, xt.fld[i], sfprf); \
helper_compute_fprf(env, xt.fld, sfprf); \
} \
} \
\
@@ -2702,23 +2694,23 @@ void helper_##op(CPUPPCState *env, uint32_t opcode) \
helper_float_check_status(env); \
}
VSX_ROUND(xsrdpi, 1, float64, f64, float_round_nearest_even, 1)
VSX_ROUND(xsrdpic, 1, float64, f64, FLOAT_ROUND_CURRENT, 1)
VSX_ROUND(xsrdpim, 1, float64, f64, float_round_down, 1)
VSX_ROUND(xsrdpip, 1, float64, f64, float_round_up, 1)
VSX_ROUND(xsrdpiz, 1, float64, f64, float_round_to_zero, 1)
VSX_ROUND(xsrdpi, 1, float64, VsrD(0), float_round_nearest_even, 1)
VSX_ROUND(xsrdpic, 1, float64, VsrD(0), FLOAT_ROUND_CURRENT, 1)
VSX_ROUND(xsrdpim, 1, float64, VsrD(0), float_round_down, 1)
VSX_ROUND(xsrdpip, 1, float64, VsrD(0), float_round_up, 1)
VSX_ROUND(xsrdpiz, 1, float64, VsrD(0), float_round_to_zero, 1)
VSX_ROUND(xvrdpi, 2, float64, f64, float_round_nearest_even, 0)
VSX_ROUND(xvrdpic, 2, float64, f64, FLOAT_ROUND_CURRENT, 0)
VSX_ROUND(xvrdpim, 2, float64, f64, float_round_down, 0)
VSX_ROUND(xvrdpip, 2, float64, f64, float_round_up, 0)
VSX_ROUND(xvrdpiz, 2, float64, f64, float_round_to_zero, 0)
VSX_ROUND(xvrdpi, 2, float64, VsrD(i), float_round_nearest_even, 0)
VSX_ROUND(xvrdpic, 2, float64, VsrD(i), FLOAT_ROUND_CURRENT, 0)
VSX_ROUND(xvrdpim, 2, float64, VsrD(i), float_round_down, 0)
VSX_ROUND(xvrdpip, 2, float64, VsrD(i), float_round_up, 0)
VSX_ROUND(xvrdpiz, 2, float64, VsrD(i), float_round_to_zero, 0)
VSX_ROUND(xvrspi, 4, float32, f32, float_round_nearest_even, 0)
VSX_ROUND(xvrspic, 4, float32, f32, FLOAT_ROUND_CURRENT, 0)
VSX_ROUND(xvrspim, 4, float32, f32, float_round_down, 0)
VSX_ROUND(xvrspip, 4, float32, f32, float_round_up, 0)
VSX_ROUND(xvrspiz, 4, float32, f32, float_round_to_zero, 0)
VSX_ROUND(xvrspi, 4, float32, VsrW(i), float_round_nearest_even, 0)
VSX_ROUND(xvrspic, 4, float32, VsrW(i), FLOAT_ROUND_CURRENT, 0)
VSX_ROUND(xvrspim, 4, float32, VsrW(i), float_round_down, 0)
VSX_ROUND(xvrspip, 4, float32, VsrW(i), float_round_up, 0)
VSX_ROUND(xvrspiz, 4, float32, VsrW(i), float_round_to_zero, 0)
uint64_t helper_xsrsp(CPUPPCState *env, uint64_t xb)
{

@@ -101,7 +101,7 @@ static inline int hreg_store_msr(CPUPPCState *env, target_ulong value,
hreg_compute_hflags(env);
#if !defined(CONFIG_USER_ONLY)
if (unlikely(msr_pow == 1)) {
if ((*env->check_pow)(env)) {
if (!env->pending_interrupts && (*env->check_pow)(env)) {
cs->halted = 1;
excp = EXCP_HALTED;
}

@@ -6699,6 +6699,8 @@ POWERPC_FAMILY(970)(ObjectClass *oc, void *data)
pcc->flags = POWERPC_FLAG_VRE | POWERPC_FLAG_SE |
POWERPC_FLAG_BE | POWERPC_FLAG_PMM |
POWERPC_FLAG_BUS_CLK;
pcc->l1_dcache_size = 0x8000;
pcc->l1_icache_size = 0x10000;
}
static int check_pow_970FX (CPUPPCState *env)
@@ -6791,6 +6793,8 @@ POWERPC_FAMILY(970FX)(ObjectClass *oc, void *data)
pcc->flags = POWERPC_FLAG_VRE | POWERPC_FLAG_SE |
POWERPC_FLAG_BE | POWERPC_FLAG_PMM |
POWERPC_FLAG_BUS_CLK;
pcc->l1_dcache_size = 0x8000;
pcc->l1_icache_size = 0x10000;
}
static int check_pow_970MP (CPUPPCState *env)
@@ -6877,6 +6881,8 @@ POWERPC_FAMILY(970MP)(ObjectClass *oc, void *data)
pcc->flags = POWERPC_FLAG_VRE | POWERPC_FLAG_SE |
POWERPC_FLAG_BE | POWERPC_FLAG_PMM |
POWERPC_FLAG_BUS_CLK;
pcc->l1_dcache_size = 0x8000;
pcc->l1_icache_size = 0x10000;
}
static void init_proc_power5plus(CPUPPCState *env)
@@ -6967,6 +6973,8 @@ POWERPC_FAMILY(POWER5P)(ObjectClass *oc, void *data)
pcc->flags = POWERPC_FLAG_VRE | POWERPC_FLAG_SE |
POWERPC_FLAG_BE | POWERPC_FLAG_PMM |
POWERPC_FLAG_BUS_CLK;
pcc->l1_dcache_size = 0x8000;
pcc->l1_icache_size = 0x10000;
}
static void init_proc_POWER7 (CPUPPCState *env)

@@ -69,10 +69,14 @@ _use_sample_img empty.bochs.bz2
poke_file "$TEST_IMG" "$disk_size_offset" "\x00\xc0\x0f\x00\x00\x00\x00\x7f"
{ $QEMU_IO -c "read 2T 4k" $TEST_IMG; } 2>&1 | _filter_qemu_io | _filter_testdir
_use_sample_img empty.bochs.bz2
poke_file "$TEST_IMG" "$catalog_size_offset" "\x10\x00\x00\x00"
{ $QEMU_IO -c "read 0xfbe00 512" $TEST_IMG; } 2>&1 | _filter_qemu_io | _filter_testdir
echo
echo "== Negative extent size =="
_use_sample_img empty.bochs.bz2
poke_file "$TEST_IMG" "$extent_size_offset" "\xff\xff\xff\xff"
poke_file "$TEST_IMG" "$extent_size_offset" "\x00\x00\x00\x80"
{ $QEMU_IO -c "read 768k 512" $TEST_IMG; } 2>&1 | _filter_qemu_io | _filter_testdir
echo

@@ -15,12 +15,14 @@ no file open, try 'help open'
== Too small catalog bitmap for image size ==
qemu-io: can't open device TEST_DIR/empty.bochs: Catalog size is too small for this disk size
no file open, try 'help open'
qemu-io: can't open device TEST_DIR/empty.bochs: Catalog size is too small for this disk size
no file open, try 'help open'
== Negative extent size ==
qemu-io: can't open device TEST_DIR/empty.bochs: Extent size 4294967295 is too large
qemu-io: can't open device TEST_DIR/empty.bochs: Extent size 2147483648 is too large
no file open, try 'help open'
== Zero extent size ==
qemu-io: can't open device TEST_DIR/empty.bochs: Extent size may not be zero
qemu-io: can't open device TEST_DIR/empty.bochs: Extent size must be at least 512
no file open, try 'help open'
*** done
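The updated bochs test pokes raw little-endian values into the image header, and the changed expected output above follows directly from decoding them: the old pattern \xff\xff\xff\xff reads back as 4294967295, while the new \x00\x00\x00\x80 is 0x80000000, i.e. 2147483648, which is why the error message changes. A quick standalone check of that decoding (illustrative only):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint8_t old_bytes[4] = { 0xff, 0xff, 0xff, 0xff };
        const uint8_t new_bytes[4] = { 0x00, 0x00, 0x00, 0x80 };

        /* the header field is stored little-endian, low byte first */
        uint32_t old_size = old_bytes[0] | old_bytes[1] << 8 |
                            old_bytes[2] << 16 | (uint32_t)old_bytes[3] << 24;
        uint32_t new_size = new_bytes[0] | new_bytes[1] << 8 |
                            new_bytes[2] << 16 | (uint32_t)new_bytes[3] << 24;

        printf("%u %u\n", old_size, new_size);   /* 4294967295 2147483648 */
        return 0;
    }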

@@ -278,7 +278,7 @@ static void sdl_hide_cursor(void)
SDL_ShowCursor(1);
SDL_SetCursor(sdl_cursor_hidden);
} else {
SDL_ShowCursor(0);
SDL_SetRelativeMouseMode(SDL_TRUE);
}
}
@@ -289,6 +289,7 @@ static void sdl_show_cursor(void)
}
if (!qemu_input_is_absolute()) {
SDL_SetRelativeMouseMode(SDL_FALSE);
SDL_ShowCursor(1);
if (guest_cursor &&
(gui_grab || qemu_input_is_absolute() || absolute_enabled)) {
@@ -403,13 +404,17 @@ static void sdl_send_mouse_event(struct sdl2_state *scon, int dx, int dy,
}
qemu_input_queue_abs(scon->dcl.con, INPUT_AXIS_X, off_x + x, max_w);
qemu_input_queue_abs(scon->dcl.con, INPUT_AXIS_Y, off_y + y, max_h);
} else if (guest_cursor) {
x -= guest_x;
y -= guest_y;
guest_x += x;
guest_y += y;
qemu_input_queue_rel(scon->dcl.con, INPUT_AXIS_X, x);
qemu_input_queue_rel(scon->dcl.con, INPUT_AXIS_Y, y);
} else {
if (guest_cursor) {
x -= guest_x;
y -= guest_y;
guest_x += x;
guest_y += y;
dx = x;
dy = y;
}
qemu_input_queue_rel(scon->dcl.con, INPUT_AXIS_X, dx);
qemu_input_queue_rel(scon->dcl.con, INPUT_AXIS_Y, dy);
}
qemu_input_event_sync();
}
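The cursor handling change above replaces SDL_ShowCursor(0) with SDL_SetRelativeMouseMode(SDL_TRUE) when the pointer is grabbed for a relative-mode guest: relative mouse mode both hides the host cursor and delivers unbounded xrel/yrel deltas, and the reworked sdl_send_mouse_event() then queues the appropriate relative motion. A minimal standalone SDL2 sketch of that mode (illustrative, not QEMU code):

    #include <SDL2/SDL.h>
    #include <stdio.h>

    int main(void)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("relative mouse demo",
                                           SDL_WINDOWPOS_UNDEFINED,
                                           SDL_WINDOWPOS_UNDEFINED,
                                           640, 480, 0);

        /* Hides the cursor and reports raw relative motion, replacing the
         * old SDL_ShowCursor(0) approach. */
        SDL_SetRelativeMouseMode(SDL_TRUE);

        SDL_Event ev;
        while (SDL_WaitEvent(&ev)) {
            if (ev.type == SDL_MOUSEMOTION) {
                printf("dx=%d dy=%d\n", ev.motion.xrel, ev.motion.yrel);
            } else if (ev.type == SDL_QUIT) {
                break;
            }
        }

        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }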