Compare commits

..

74 Commits

Author SHA1 Message Date
Michael Tokarev
712271e21a Revert "serial: fix retry logic"
This reverts commit 67c5322d70:

    I'm not sure if the retry logic has ever worked when not using FIFO mode.  I
    found this while writing a test case although code inspection confirms it is
    definitely broken.

    The TSR retry logic will never actually happen because it is guarded by an
    'if (s->tsr_retry > 0)' but this is the only place that can ever make the
    variable greater than zero.  That effectively makes the retry logic an 'if (0)'.

    I believe this is a typo and the intention was >= 0.  Once this is fixed though,
    I see double transmits with my test case.  This is because in the non FIFO
    case, serial_xmit may get invoked while LSR.THRE is still high because the
    character was processed but the retransmit timer was still active.

    We can handle this by simply checking for LSR.THRE and returning early.  It's
    possible that the FIFO paths also need some attention.

    Cc: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>

Even if the previous logic never worked, the new logic breaks things,
namely:

 qemu -enable-kvm -nographic -kernel /boot/vmlinuz-$(uname -r) -append console=ttyS0 -serial pty

the above command causes the virtual machine to get stuck at startup,
using 100% CPU, until one connects to the pty and sends a character to it.

Note this is a rather typical invocation for various headless virtual
machines started by libvirt.

So revert this change for now, until a better solution is found.

Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit b37a2e4576)
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-01-24 16:10:09 +01:00
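
A minimal, self-contained sketch of the guard bug described in the reverted
commit; the struct, constants and backend_ready flag are illustrative, not the
actual hw/char/serial.c code:

    /* tsr_retry starts at 0 and is only ever incremented inside this branch,
     * so guarding it with '> 0' makes the branch dead code; '>= 0' enables it. */
    #define UART_LSR_THRE 0x20
    #define MAX_RETRY     4

    struct SerialSketch { int lsr; int tsr_retry; };

    static void serial_xmit_sketch(struct SerialSketch *s, int backend_ready)
    {
        if (s->lsr & UART_LSR_THRE) {
            return;                     /* char still pending: avoid a double transmit */
        }
        if (s->tsr_retry >= 0 && s->tsr_retry < MAX_RETRY) {   /* was: > 0 */
            if (!backend_ready) {
                s->tsr_retry++;         /* the only place that makes it non-zero */
                return;                 /* a retry timer would re-invoke us later */
            }
        }
        s->tsr_retry = 0;
    }
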
Asias He
7a423c1afd scsi: Allocate SCSITargetReq r->buf dynamically [CVE-2013-4344]
r->buf is hardcoded to 2056 bytes, which is (256 + 1) * 8, allowing at most
256 LUNs. If more than 256 LUNs are specified by the user, we have a buffer
overflow in scsi_target_emulate_report_luns.

To fix, we allocate the buffer dynamically.

Signed-off-by: Asias He <asias@redhat.com>
Tested-by: Michael Roth <mdroth@linux.vnet.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 846424350b)
[AF: Backported]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2014-01-24 15:54:09 +01:00
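
A hedged sketch of the kind of change described: sizing the REPORT LUNS reply
buffer from the actual LUN count instead of a fixed 2056 bytes. The helper name
and layout details are illustrative, not the exact hw/scsi code:

    #include <stdint.h>
    #include <stdlib.h>

    /* REPORT LUNS payload: 8-byte header plus 8 bytes per LUN.  A fixed
     * (256 + 1) * 8 = 2056 byte buffer overflows past 256 LUNs; allocating
     * from the real count removes the limit. */
    static uint8_t *build_report_luns_sketch(uint32_t n_luns, size_t *len_out)
    {
        size_t len = ((size_t)n_luns + 1) * 8;
        uint8_t *buf = calloc(1, len);

        if (!buf) {
            return NULL;
        }
        buf[0] = (n_luns * 8) >> 24;    /* LUN list length, big endian */
        buf[1] = (n_luns * 8) >> 16;
        buf[2] = (n_luns * 8) >> 8;
        buf[3] = (n_luns * 8);
        *len_out = len;
        return buf;
    }
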
Laszlo Ersek
2b59d86cc1 qga: unlink just created guest-file if fchmod() or fdopen() fails on it
We shouldn't allow guest filesystem pollution on error paths.

Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit 2b72001806)

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-26 15:59:46 +02:00
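
A minimal sketch of the error-path cleanup this describes, assuming plain POSIX
calls (the real guest agent code differs in detail): if we just created the
file and a follow-up step fails, remove it again so the guest filesystem is not
left polluted.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/stat.h>

    static FILE *open_new_file_sketch(const char *path)
    {
        FILE *fp;
        int fd;

        fd = open(path, O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd < 0) {
            return NULL;
        }
        if (fchmod(fd, 0666) != 0 || (fp = fdopen(fd, "w")) == NULL) {
            close(fd);
            unlink(path);       /* we created it, so remove it again on failure */
            return NULL;
        }
        return fp;
    }
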
Laszlo Ersek
7a0fbb2149 qga: distinguish binary modes in "guest_file_open_modes" map
In Windows guests this may make a difference.

Since the original patch (commit c689b4f1) sought to be pedantic and to
consider theoretical corner cases of portability, we should fix it up
where it failed to come through in that pursuit.

Suggested-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Reviewed-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
(cherry picked from commit 8fe6bbca71)

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-26 15:59:46 +02:00
Laszlo Ersek
19c04c8cf5 qga: set umask 0077 when daemonizing (CVE-2013-2007)
The qemu guest agent creates a bunch of files with insecure permissions
when started in daemon mode. For example:

  -rw-rw-rw- 1 root root /var/log/qemu-ga.log
  -rw-rw-rw- 1 root root /var/run/qga.state
  -rw-rw-rw- 1 root root /var/log/qga-fsfreeze-hook.log

In addition, at least all files created with the "guest-file-open" QMP
command, and all files created with shell output redirection (or
otherwise) by utilities invoked by the fsfreeze hook script are affected.

For now mask all file mode bits for "group" and "others" in
become_daemon().

Temporarily, for compatibility reasons, stick with the 0666 file-mode in
case of files newly created by the "guest-file-open" QMP call. Do so
without changing the umask temporarily.

Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit c689b4f1ba)

Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-26 15:59:28 +02:00
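
A sketch of the umask change, assuming standard POSIX calls (the real
become_daemon() does much more than this):

    #include <sys/stat.h>

    static void become_daemon_sketch(void)
    {
        /* mask all "group" and "others" bits, i.e. umask 0077, so the log,
         * state and fsfreeze-hook files are created 0600 instead of 0666 */
        umask(S_IRWXG | S_IRWXO);
        /* ... fork(), setsid(), reopen the standard fds, etc. ... */
    }

Files created by the "guest-file-open" QMP call keep their historical 0666 mode
not by touching the umask but by adjusting the just-created file explicitly
(e.g. with fchmod() after open(), as sketched earlier).
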
Alexander Graf
6ad8ca6765 linux-user: lseek: explicitly cast end offsets to signed
When doing lseek, SEEK_END indicates that the offset is a signed value
that is usually negative, while the other SEEK modes indicate that it is unsigned.

When converting from 32-bit to 64-bit parameters, we need to take this into
account and allow SEEK_END offsets to be negative, while the other SEEK modes
usually indicate an absolute position, which we need to keep unsigned.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:22 +02:00
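
A small sketch of the 32-to-64-bit conversion issue, with illustrative names:
the offset must be widened as a signed value for SEEK_END (so 0xfffff000 stays
-4096) but as an unsigned value for absolute positions.

    #include <stdint.h>
    #include <stdio.h>

    static int64_t widen_offset_sketch(uint32_t guest_off, int whence)
    {
        if (whence == SEEK_END) {
            return (int64_t)(int32_t)guest_off;     /* sign-extend */
        }
        return (int64_t)guest_off;                  /* keep unsigned/absolute */
    }
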
Alexander Graf
9503d039b2 Make char muxer more robust wrt small FIFOs
Virtio-Console can only process one character at a time. Using it on S390
gave me strange "lags" where I got the previously pressed character when
pressing a new one. So I typed in "abc" and only received "a", then pressed
"d" but the guest received "b", and so on.

While the stdio driver calls a poll function that keeps processing its
queue in case virtio-console can't take multiple characters at once, the
muxer does not have such a callback, so it can't empty its queue.

To work around that limitation, I introduced a new timer that only becomes
active when the guest cannot receive any more characters. In that case
it polls again after a while to check whether the guest can receive input again.

This patch fixes input when using -nographic on s390 for me.
2013-05-26 15:04:22 +02:00
Alexander Graf
41641ce4e5 console: add question-mark escape operator
Some termcaps (found using SLES11SP1) use [? sequences. According to man
console_codes (http://linux.die.net/man/4/console_codes) the question mark
is a nop and should simply be ignored.

This patch does exactly that, rendering screen output readable when
outputting guest serial consoles to the graphical console emulator.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:22 +02:00
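
A minimal sketch of the parsing change, with an illustrative state-machine
helper rather than the actual console code: a '?' after "ESC [" introduces a
private-mode sequence (e.g. "ESC [ ? 25 l" to hide the cursor) and can simply
be skipped while the rest of the sequence is collected.

    static void handle_csi_char_sketch(int *param, char ch)
    {
        if (ch == '?') {
            return;                         /* nop: ignore and keep parsing */
        }
        if (ch >= '0' && ch <= '9') {
            *param = *param * 10 + (ch - '0');
            return;
        }
        /* ';', the final byte, etc. are handled elsewhere in the real parser */
    }
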
Alexander Graf
6897c46882 Legacy Patch kvm-qemu-preXX-report-default-mac-used.patch 2013-05-26 15:04:22 +02:00
Alexander Graf
c144bbfb2e Legacy Patch kvm-qemu-preXX-dictzip3.patch 2013-05-26 15:04:22 +02:00
Alexander Graf
55b40bcdc4 Add tar container format
Tar is a very widely used format to store data in. Sometimes people even put
virtual machine images in there.

So it makes sense for qemu to be able to read from tar files. I implemented a
from-scratch reader that also knows about the GNU sparse format, which
is what pigz creates.

This version checks for filenames that end on well-known extensions. The logic
could be changed to search for filenames given on the command line, but that
would require changes to more parts of qemu.

The tar reader in conjunction with dzip gives us the chance to download
tar'ed up virtual machine images (even via http) and instantly make use of
them.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Bruce Rogers <brogers@novell.com>
2013-05-26 15:04:22 +02:00
Alexander Graf
399781985e Add support for DictZip enabled gzip files
DictZip is an extension to the gzip format that allows random seeks in gzip
compressed files by cutting the file into pieces and storing the piece offsets
in the "extra" header of the gzip format.

Thanks to that extension, we can use gzip compressed files as block backend,
though only in read mode.

This makes a lot of sense when stacked with tar files that can then be shipped
to VM users. If a VM image is inside a tar file that is inside a DictZip
enabled gzip file, the user can run the tar.gz file as is without having to
extract the image first.

Tar patch follows.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Bruce Rogers <brogers@novell.com>
2013-05-26 15:04:21 +02:00
Alexander Graf
21ec069779 linux-user: use target_ulong
Linux syscalls pass pointers, data lengths, or other information of that sort
to the kernel. This is all data you don't want to have sign-extended.
Otherwise a 32-bit size parameter widened into a 64-bit host variable can be
sign-extended into a negative number, breaking lseek for example.

Pass syscall arguments as ulong always.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:21 +02:00
Alexander Graf
5c50df3458 linux-user: add more blk ioctls
Implement a few more ioctls that operate on block devices.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:21 +02:00
Andreas Färber
be603e808e vnc: password-file= and incoming-connections=
TBD (from SUSE Studio team)
2013-05-26 15:04:21 +02:00
Andreas Färber
ba212d0628 slirp: -nooutgoing
TBD (from SUSE Studio team)
2013-05-26 15:04:21 +02:00
Alexander Graf
fb44bccb2b linux-user: XXX disable fiemap
agraf: fiemap breaks in libarchive. Disable it for now.
2013-05-26 15:04:21 +02:00
Alexander Graf
e2259d5793 linux-user: implement FS_IOC_SETFLAGS ioctl
Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2

  - use TYPE_LONG instead of TYPE_INT
2013-05-26 15:04:21 +02:00
Alexander Graf
da3c3b83e0 linux-user: implement FS_IOC_GETFLAGS ioctl
Signed-off-by: Alexander Graf <agraf@suse.de>

---

v1 -> v2:

  - use TYPE_LONG instead of TYPE_INT
2013-05-26 15:04:21 +02:00
Alexander Graf
890e0b2cf5 linux-user: Fake /proc/cpuinfo
Fedora 17 for ARM reads /proc/cpuinfo and fails if it doesn't contain
ARM-related content. This patch implements a quick hack to expose real
/proc/cpuinfo data taken from a real-world machine.

The real fix would be to generate at least the flags automatically based
on the selected CPU. Please do not submit this patch upstream until this
has happened.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:21 +02:00
Alexander Graf
849a5d15f2 linux-user: lock tb flushing too
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:20 +02:00
Alexander Graf
2cf41ff75a linux-user: Run multi-threaded code on a single core
Running multi-threaded code can easily expose some of the fundamental
breakages in QEMU's design. It's just not a well supported scenario.

So if we pin the whole process to a single host CPU, we guarantee that
we will never have concurrent memory access actually happen. We can still
get scheduled away at any time, so it's not a complete guarantee, but apparently
it reduces the odds well enough to get my test cases to pass.

This gets Java 1.7 working for me again on my test box.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:20 +02:00
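
A sketch of pinning the whole process to one host CPU with Linux's affinity
API; where exactly the real patch hooks this into the linux-user startup path
is not shown here.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static void pin_to_one_cpu_sketch(void)
    {
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(0, &set);               /* any single CPU will do */
        /* pid 0 == calling thread; threads created later inherit the mask */
        if (sched_setaffinity(0, sizeof(set), &set) < 0) {
            perror("sched_setaffinity");
        }
    }
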
Alexander Graf
aa3dc18512 linux-user: lock tcg
The tcg code generator is not thread safe. Lock its generation between
different threads.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:20 +02:00
Alexander Graf
8137fc352d linux-user: fix segmentation fault passing with g2h(x) != x
When forwarding a segmentation fault into the guest process, we were passing
the host's address directly into the guest process's signal descriptor.

That obviously confused the guest process, since it didn't know what to make
of the (usually 32-bit truncated) address. Passing in g2h(address) makes the
guest process a lot happier.

This fixes java running in arm-linux-user for me.

Signed-off-by: Alexander Graf <agraf@suse.de>
[AF: Rebased onto AREG0 fix for v1.2, squashed fixup by agraf]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-26 15:04:20 +02:00
Alexander Graf
bcef1d17e9 linux-user: Ignore broken loop ioctl
During invocations of losetup, we run into an ioctl that doesn't
exist. However, because of that we output an error, which then
screws up the kiwi logic around that call.

So let's silently ignore that bogus ioctl.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:20 +02:00
Alexander Graf
b34244cfd6 linux-user: arm: no tb_flush on reset
When running automoc4 as linux-user guest program, it segfaults right after
it creates a thread. Bisecting pointed to commit a84fac1426 which introduces
tb_flush on reset.

So something in our thread creation is broken. But for now, let's revert the
change to at least get a working build again.
2013-05-26 15:04:19 +02:00
Alexander Graf
941a2202e1 linux-user: binfmt: support host binaries
When we have a working host binary equivalent for the guest binary we're
trying to run, let's just use that instead as it will be a lot faster.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:19 +02:00
Alexander Graf
6129f046f7 linux-user: fix segfault deadlock
When entering the guest we take a lock to ensure that nobody else messes
with our TB chaining while we're doing it. If we get a segfault inside that
code, we manage to carry on, but never unlock the lock.

This patch forces unlocking of that lock in the segv handler. I'm not sure
this is the right approach though. Maybe we should rather make sure we don't
segfault in the code? I would greatly appreciate someone more knowledgeable
than me taking a look at this :).

Example code to trigger this is at: http://csgraf.de/tmp/conftest.c

Reported-by: Fabio Erculiani <lxnay@sabayon.org>
Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:19 +02:00
Alexander Graf
1b71cc7ca0 PPC: KVM: Disable mmu notifier check
When using hugetlbfs (which is required for HV mode KVM on 970), we
check for MMU notifiers that on 970 can not be implemented properly.

So disable the check for mmu notifiers on PowerPC guests, making
KVM guests work there, even if possibly racy in some odd circumstances.
2013-05-26 15:04:19 +02:00
Alexander Graf
d25cda59f2 linux-user: be silent about capget failures
Complaining about capget doesn't buy us anything, but makes %check
fail in certain builds. So better not complain about its missing
implementation and go on with life :)

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:19 +02:00
Alexander Graf
84e839136f linux-user: Ignore timer_create syscall
We don't implement the timer_create syscall, but shouting out loud
about it breaks some %check tests in OBS, so better ignore it silently.

Signed-off-by: Alexander Graf <agraf@suse.de>
2013-05-26 15:04:19 +02:00
Alexander Graf
1a53708166 linux-user: add binfmt wrapper for argv[0] handling
When using qemu's linux-user binaries through binfmt, argv[0] gets lost
along the execution because qemu only gets passed in the full file name
to the executable while argv[0] can be something completely different.

This breaks in some subtle situations, such as the grep and make test
suites.

This patch adds a wrapper binary called qemu-$TARGET-binfmt that can be
used with binfmt's P flag which passes the full path _and_ argv[0] to
the binfmt handler.

The binary would be smart enough to be versatile and only exist in the
system once, creating the qemu binary path names from its own argv[0].
However, this seemed like it didn't fit the make system too well, so
we're currently creating a new binary for each target architecture.

CC: Reinhard Max <max@suse.de>
Signed-off-by: Alexander Graf <agraf@suse.de>
[AF: Rebased onto new Makefile infrastructure]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-26 15:04:19 +02:00
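
A hedged sketch of such a wrapper, assuming binfmt_misc is registered with the
'P' (preserve-argv[0]) flag, so the kernel invokes the handler with the full
path in argv[1] and the original argv[0] in argv[2], and assuming qemu-user's
"-0" option to set the guest argv[0]. The emulator path is a placeholder.

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        char *new_argv[argc + 3];
        int i;

        if (argc < 3) {
            fprintf(stderr, "usage: %s /full/path argv0 [args...]\n", argv[0]);
            return 1;
        }
        new_argv[0] = "/usr/bin/qemu-arm";      /* placeholder emulator path */
        new_argv[1] = "-0";
        new_argv[2] = argv[2];                  /* original argv[0] */
        new_argv[3] = argv[1];                  /* full path to the guest binary */
        for (i = 3; i < argc; i++) {
            new_argv[i + 1] = argv[i];
        }
        new_argv[argc + 1] = NULL;
        execv(new_argv[0], new_argv);
        perror("execv");
        return 1;
    }
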
Ulrich Hecht
cc67cf446c configure: Enable mipsn32*-linux-user builds
Signed-off-by: Ulrich Hecht <uli@suse.de>
[AF: Merged default-configs upstream]
[AF: Merged with new or32-linux-user for v1.2]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-26 15:04:19 +02:00
Ulrich Hecht
8014305898 block/vmdk: Support creation of SCSI VMDK images in qemu-img
Signed-off-by: Ulrich Hecht <uli@suse.de>
[AF: Changed BLOCK_FLAG_SCSI from 8 to 16 for v1.2]
Signed-off-by: Andreas Färber <afaerber@suse.de>
2013-05-26 15:04:19 +02:00
Alexander Graf
a6b483c31c qemu-cvs-ioctl_nodirection
The direction given in the ioctl should be correct, so we can assume the
communication is uni-directional. The ALSA developers did not like this
concept though, and declared ioctls IOC_R and IOC_W even though they were
IOC_RW.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2013-05-26 15:04:18 +02:00
Alexander Graf
6d52ff37b5 qemu-cvs-ioctl_debug
Extends unsupported ioctl debug output.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2013-05-26 15:04:18 +02:00
Ulrich Hecht
13795190e7 qemu-cvs-gettimeofday
No clue what this is for.
2013-05-26 15:04:18 +02:00
Alexander Graf
1125720b47 qemu-cvs-alsa_mmap
Hack to prevent ALSA from using mmap() interface to simplify emulation.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2013-05-26 15:04:18 +02:00
Alexander Graf
e00f3ed887 qemu-cvs-alsa_ioctl
Implements ALSA ioctls on PPC hosts.

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2013-05-26 15:04:18 +02:00
Alexander Graf
3f30407eb5 qemu-cvs-alsa_bitfield
Implements TYPE_INTBITFIELD partially. (required for ALSA support)

Signed-off-by: Alexander Graf <agraf@suse.de>
Signed-off-by: Ulrich Hecht <uli@suse.de>
2013-05-26 15:04:18 +02:00
Ulrich Hecht
033bf37384 qemu-0.9.0.cvs-binfmt
Fixes binfmt_misc setup script:
- x86_64 is i386-compatible
- m68k signature fixed
- path to QEMU

Signed-off-by: Ulrich Hecht <uli@suse.de>
2013-05-26 15:04:18 +02:00
Alexander Graf
07ae122b6a XXX work around SA_RESTART race with boehm-gc (ARM only)
[AF: CPUState -> CPUArchState, adapt to reindentation]
2013-05-26 15:04:18 +02:00
Alexander Graf
6197cf31be XXX dont dump core on sigabort 2013-05-26 15:04:17 +02:00
Peter Maydell
665ce82fd8 Handle CPU interrupts by inline checking of a flag
Fix the nasty TCG race conditions and crashes by implementing cpu_exit
as setting a flag which is checked at the start of each TB. This is
slightly slower than the attempt to have cpu_exit alter the graph of
TBs, but it doesn't crash if a thread or signal handler calls cpu_exit
while the execution thread is itself modifying the TB graph.

This version of the patch includes command line option "-no-stopflag"
which reverts to the previous racy behaviour. This is intended for
convenience in testing and comparative benchmarking and won't be
in the final patch.

It's probably worth experimenting with whether the flag-testing
code has the branch in a sense which confuses branch-prediction
and thus whether flipping it might change performance.

Mostly this needs benchmarking to determine what the actual speed
hit is, which I never got round to. Feel free to do some :-)

[AF: CPUState -> CPUArchState]
2013-05-26 15:04:17 +02:00
Michael Roth
04024dea26 update VERSION for v1.3.1
Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-28 10:38:28 -06:00
Markus Armbruster
1bd4397e8d qxl: Fix SPICE_RING_PROD_ITEM(), SPICE_RING_CONS_ITEM() sanity check
The pointer arithmetic there is safe, but ugly.  Coverity grouses
about it.  However, the actual comparison is off by one: <= end
instead of < end.  Fix by rewriting the check in a cleaner way.

Signed-off-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit bc5f92e5db)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:44:31 -06:00
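
A generic sketch of the off-by-one pattern being fixed here: validating that an
item pointer lies entirely inside a buffer needs a strict comparison against the
end (or, more cleanly, a check on the remaining length), not "<= end".

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    static bool item_in_range_sketch(const uint8_t *start, const uint8_t *end,
                                     const uint8_t *item, size_t item_size)
    {
        /* "item <= end" would accept an item starting one past the buffer;
         * checking the remaining length avoids the pointer-arithmetic
         * ugliness altogether. */
        return item >= start && item < end && (size_t)(end - item) >= item_size;
    }
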
Sander Eikelenboom
e76672424d Fix compile errors when enabling Xen debug logging.
Signed-off-by: Sander Eikelenboom <linux@eikelenboom.it>
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit f1b8caf1d9)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:08:52 -06:00
Stefano Stabellini
df50a7e0cb xen: fix trivial PCI passthrough MSI-X bug
We are currently passing entry->data as address parameter. Pass
entry->addr instead.

Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
Tested-by: Sander Eikelenboom <linux@eikelenboom.it>
Xen-devel: http://marc.info/?l=xen-devel&m=135515462613715
(cherry picked from commit 044b99c655)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:07:21 -06:00
Roger Pau Monne
90c96d33c4 xen_disk: fix memory leak
On ioreq_release the full ioreq was memset to 0, losing all the data
and memory allocations inside the QEMUIOVector, which leads to a
memory leak. Create a new function to specifically reset the ioreq.

Reported-by: Maik Wessler <maik.wessler@yahoo.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
(cherry picked from commit 282c6a2f29)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 14:06:50 -06:00
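
A sketch of the pattern behind the fix, with illustrative types: memset()ing a
struct that owns allocations forgets the pointers inside it and leaks them, so
the reset has to clear the per-request fields explicitly and keep (or free) the
allocation deliberately.

    #include <stddef.h>

    struct iovec_sketch { void *base; size_t capacity; size_t used; };
    struct ioreq_sketch { int op; int status; struct iovec_sketch v; };

    static void ioreq_reset_sketch(struct ioreq_sketch *req)
    {
        /* memset(req, 0, sizeof(*req)) would drop v.base and leak it */
        req->op = 0;
        req->status = 0;
        req->v.used = 0;        /* keep base/capacity so the buffer can be reused */
    }
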
Peter Maydell
4ee28799d4 tcg/target-arm: Add missing parens to assertions
Silence a (legitimate) complaint about missing parentheses:

tcg/arm/tcg-target.c: In function ‘tcg_out_qemu_ld’:
tcg/arm/tcg-target.c:1148:5: error: suggest parentheses around
comparison in operand of ‘&’ [-Werror=parentheses]
tcg/arm/tcg-target.c: In function ‘tcg_out_qemu_st’:
tcg/arm/tcg-target.c:1357:5: error: suggest parentheses around
comparison in operand of ‘&’ [-Werror=parentheses]

which meant that we would mistakenly always assert if running
a QEMU built with debug enabled on ARM.

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 5256a7208a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:52:23 -06:00
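
The warning is legitimate because '==' binds tighter than '&'. A tiny
illustration of why the unparenthesized form always asserts:

    #include <assert.h>

    static void precedence_example(int opc)
    {
        /* intended: check that the low bit of opc is clear */
        /* assert(opc & 1 == 0);   parses as opc & (1 == 0), i.e. opc & 0,
         *                         which is always 0, so the assert always fires */
        assert((opc & 1) == 0);   /* what was actually meant */
    }
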
Kevin Wolf
563068a8b2 win32-aio: Fix memory leak
The buffer is allocated for both reads and writes, and obviously it
should be freed even if an error occurs.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit e8bccad5ac)

Conflicts:

	block/win32-aio.c

*addressed conflict due to buggy g_free() still in use instead of
qemu_vfree() as it is upstream (via commit 7479acdb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:43:10 -06:00
Kevin Wolf
cdb483457c win32-aio: Fix vectored reads
Copying data in the right direction really helps a lot!

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit bcbbd234d4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:24:00 -06:00
Kevin Wolf
9d173df553 aio: Fix return value of aio_poll()
aio_poll() must return true if any work is still pending, even if it
didn't make progress, so that bdrv_drain_all() doesn't stop waiting too
early. The possibility of stopping early occasionally led to a failed
assertion in bdrv_drain_all(), when some in-flight request was missed
and the function didn't really drain all requests.

In order to make that change, the return value as specified in the
function comment must change for blocking = false; fortunately, the
return value of blocking = false callers is only used in test cases, so
this change shouldn't cause any trouble.

Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 2ea9b58f0b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-21 13:23:52 -06:00
Paolo Bonzini
204dd38c2d raw-posix: fix bdrv_aio_ioctl
When the raw-posix aio=thread code was moved from posix-aio-compat.c
to block/raw-posix.c, there was an unintended change to the ioctl code.
The code used to return the ioctl command, which posix_aio_read()
would later morph into a zero.  This hack is not necessary anymore,
and in fact breaks scsi-generic (which expects a zero return code).
Remove it.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
(cherry picked from commit b608c8dc02)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:45 -06:00
Alex Williamson
86bab45948 vfio-pci: Loosen sanity checks to allow future features
VFIO_PCI_NUM_REGIONS and VFIO_PCI_NUM_IRQS should never have been
used in this manner as it locks a specific kernel implementation.
Future features may introduce new regions or interrupt entries
(VGA may add legacy ranges, AER might add an IRQ for error
signalling).  Fix this before it gets us into trouble.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit 8fc94e5a80)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:45 -06:00
Alex Williamson
006c747440 pci-assign: Enable MSIX on device to match guest
When a guest enables MSIX on a device we evaluate the MSIX vector
table, typically find no unmasked vectors and don't switch the device
to MSIX mode.  This generally works fine and the device will be
switched once the guest enables and therefore unmasks a vector.
Unfortunately some drivers enable MSIX, then use interfaces to send
commands between VF & PF or PF & firmware that act based on the host
state of the device.  These therefore may break when MSIX is managed
lazily.  This change re-enables the previous test used to enable MSIX
(see qemu-kvm a6b402c9), which basically guesses whether a vector
will be used based on the data field of the vector table.

Cc: qemu-stable@nongnu.org
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit feb9a2ab4b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:44 -06:00
Alex Williamson
f042cca009 vfio-pci: Make host MSI-X enable track guest
Guests typically enable MSI-X with all of the vectors in the MSI-X
vector table masked.  Only when the vector is enabled does the vector
get unmasked, resulting in a vector_use callback.  These two points,
enable and unmask, correspond to pci_enable_msix() and request_irq()
for Linux guests.  Some drivers rely on VF/PF or PF/fw communication
channels that expect the physical state of the device to match the
guest visible state of the device.  They don't appreciate lazily
enabling MSI-X on the physical device.

To solve this, enable MSI-X with a single vector when the MSI-X
capability is enabled and immediately disable the vector.  This leaves
the physical device in exactly the same state between host and guest.
Furthermore, during the brief gap where we enable vector 0, it fires into
userspace, not KVM, so the guest doesn't get spurious interrupts.
Ideally we could call VFIO_DEVICE_SET_IRQS with the right parameters
to enable MSI-X with zero vectors, but this will currently return an
error as the Linux MSI-X interfaces do not allow it.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit b0223e29af)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:05:44 -06:00
Max Filippov
1205b8080f target-xtensa: fix search_pc for the last TB opcode
Zero out tcg_ctx.gen_opc_instr_start for instructions representing the
last guest opcode in the TB.

Cc: qemu-stable@nongnu.org
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 36f25d2537)

*modified to use older global version of gen_opc_instr_start

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 23:04:53 -06:00
Paolo Bonzini
ff0c079c14 buffered_file: do not send more than s->bytes_xfer bytes per tick
Sending more was possible if the buffer was large.

Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit bde54c08b4)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:38:00 -06:00
Paolo Bonzini
d745511fc9 migration: fix migration_bitmap leak
Cc: qemu-stable@nongnu.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
(cherry picked from commit 244eaa7514)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:37:38 -06:00
Michael Contreras
5afd0ecaa6 e1000: Discard oversized packets based on SBP|LPE
Discard packets longer than 16384 when !SBP to match the hardware behavior.

Signed-off-by: Michael Contreras <michael@inetric.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
(cherry picked from commit 2c0331f4f7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:37:17 -06:00
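
A sketch of the receive-path size check described above; the constants and the
exact interaction of the SBP (store bad packets) and LPE (long packet enable)
bits are illustrative rather than copied from hw/net/e1000.c.

    #include <stdbool.h>
    #include <stddef.h>

    #define JUMBO_LIMIT_SKETCH  16384
    #define NORMAL_LIMIT_SKETCH 1522

    static bool rx_accept_size_sketch(size_t len, bool sbp, bool lpe)
    {
        size_t limit = lpe ? JUMBO_LIMIT_SKETCH : NORMAL_LIMIT_SKETCH;

        if (len > limit && !sbp) {
            return false;       /* discard, matching the hardware behaviour */
        }
        return true;
    }
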
Uri Lublin
c4cd5b0f6d qxl+vnc: register a vm state change handler for dummy spice_server
When qxl + vnc are used, a dummy spice_server is initialized.
The spice_server has to be told when the VM runstate changes,
which is what this patch does.

Without it, from qxl_send_events(), the following error message is shown:
  qxl_send_events: spice-server bug: guest stopped, ignoring

Cc: qemu-stable@nongnu.org
Signed-off-by: Uri Lublin <uril@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 938b8a36b6)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:36:22 -06:00
Gerd Hoffmann
7ca2496588 qxl: save qemu_create_displaysurface_from result
Spotted by Coverity.

https://bugzilla.redhat.com/show_bug.cgi?id=885644

Cc: qemu-stable@nongnu.org
Reported-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 2f464b5a32)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:35:40 -06:00
Max Filippov
bfae9374f1 target-xtensa: fix ITLB/DTLB page protection flags
With the MMU option, the xtensa architecture has two TLBs: ITLB and DTLB. ITLB
is only used for code access, DTLB only for data. However, TLB entries in
both TLBs have an attribute field controlling write and exec access. These
bits need to be properly masked off depending on TLB type before being
used as tlb_set_page prot argument. Otherwise the following happens:

(1) ITLB entry for some PFN gets invalidated
(2) DTLB entry for the same PFN gets updated, attributes allow code
    execution
(3) code at the page with that PFN is executed (possible due to step 2),
    entry for the TB is written into the jump cache
(4) QEMU TLB entry for the PFN gets replaced with an entry for some
    other PFN
(5) code in the TB from step 3 is executed (possible due to jump cache)
    and it accesses data, for which there's no DTLB entry, causing DTLB
    miss exception
(6) re-translation of the TB from step 5 is attempted, but there's no
    QEMU TLB entry nor xtensa ITLB entry for that PFN, which causes ITLB
    miss exception at the TB start address
(7) ITLB miss exception is handled by the guest, but execution is
    resumed from the beginning of the faulting TB (the point where ITLB
    miss occurred), not from the point where the DTLB miss occurred, which is
    wrong.

With that fix the above scenario causes ITLB miss exception (that used
to be step 7) at step 3, right at the beginning of the TB.

Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 659f807c0a)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:34:54 -06:00
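
A sketch of the masking described, assuming QEMU's generic PAGE_READ,
PAGE_WRITE and PAGE_EXEC protection bits (redefined here for the sake of a
self-contained example; not the exact target-xtensa code):

    #define PAGE_READ  0x1
    #define PAGE_WRITE 0x2
    #define PAGE_EXEC  0x4

    /* an ITLB entry must never grant data access and a DTLB entry must never
     * grant execute access, regardless of what the raw attribute bits say */
    static int tlb_prot_sketch(int raw_prot, int is_dtlb)
    {
        return raw_prot & (is_dtlb ? (PAGE_READ | PAGE_WRITE) : PAGE_EXEC);
    }
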
Gerd Hoffmann
b68c48ff01 pixman: fix vnc tight png/jpeg support
This patch adds an x argument to qemu_pixman_linebuf_fill so it can
also be used to convert a partial scanline.  Then fix tight + png/jpeg
encoding by passing in the x+y offset, so the data is read from the
correct screen location instead of the upper left corner.

Cc: 1087974@bugs.launchpad.net
Cc: qemu-stable@nongnu.org
Reported-by: Tim Hardeneck <thardeck@suse.de>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit bc210eb163)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 22:33:57 -06:00
Gerd Hoffmann
36fd8179b6 Update seabios to a810e4e72a0d42c7bc04eda57382f8e019add901
git shortlog:

Kevin O'Connor (6):
      floppy: Minor - reduce handle_0e code size when CONFIG_FLOPPY is disabled.
      vga: Minor comment spelling fix.
      Don't recursively evaluate CFLAGS variables.
      Don't use gcc's -combine option.
      Add compile checking phase to build.
      acpi: Use prt_slot() macro to describe irq pins of first PCI device.

Laszlo Ersek (1):
      maininit(): print machine UUID under seabios version message

Paolo Bonzini (1):
      acpi: reintroduce LNKS

Paolo's patch fixes the FreeBSD boot failure.

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit 15faf946f7)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 10:47:39 -06:00
Gerd Hoffmann
0bc5f4ad63 seabios: update to e8a76b0f225bba5ba9d63ab227e0a37b3beb1059
This patch updates seabios to latest git master.  Changes:

  (1) q35 patches merged.
  (2) some acpi cleanups.
  (3) fixes irq 8 conflict.

(3) makes this a candidate for the stable branch

Cc: qemu-stable@nongnu.org
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
(cherry picked from commit ff1562908d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-15 10:46:57 -06:00
Alex Williamson
37e1428cc7 vfio-pci: Don't use kvm_irqchip_in_kernel
kvm_irqchip_in_kernel() has an architecture specific meaning, so
we shouldn't be using it to determine whether to enable KVM INTx
bypass.  kvm_irqfds_enabled() seems most appropriate.  Also use this
to protect our other call to kvm_check_extension() as that explodes
when KVM isn't enabled.

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Cc: qemu-stable@nongnu.org
(cherry picked from commit d281084d3e)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:45:06 -06:00
Petar Jovanovic
518799a3e7 target-mips: Fix incorrect shift for SHILO and SHILOV
helper_shilo has not been shifting the accumulator value correctly for negative
values in the 'shift' field. Minor optimization for the shift=0 case.
This change also adds tests that trigger the issue and check for regressions.

Signed-off-by: Petar Jovanovic <petarj@mips.com>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Reviewed-by: Eric Johnson <ericj@mips.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 19e6c50d2d)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:43:29 -06:00
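
A sketch of the sign-extension aspect of the fix: the shift is a small signed
immediate, so it has to be sign-extended before deciding between a right and a
left shift of the accumulator. The 6-bit field width and the shift directions
are illustrative, not taken from the DSP ASE manual here.

    #include <stdint.h>

    static uint64_t shilo_sketch(uint64_t acc, uint32_t shift_field)
    {
        /* sign-extend the low 6 bits; QEMU assumes arithmetic right shifts
         * of signed integers (see HACKING) */
        int shift = (int32_t)(shift_field << 26) >> 26;

        if (shift == 0) {
            return acc;             /* the shift=0 fast path */
        }
        return shift > 0 ? acc >> shift : acc << -shift;
    }
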
Petar Jovanovic
16c5fe49de target-mips: Fix incorrect code and test for INSV
The content of register rs should be shifted by pos before applying the mask.
This change contains both a fix for the instruction and a fix to the existing test.

Signed-off-by: Petar Jovanovic <petarj@mips.com>
Reviewed-by: Eric Johnson <ericj@mips.com>
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 34f5606ee1)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:42:38 -06:00
David Gibson
f1a2195ec3 migration: Fix madvise breakage if host and guest have different page sizes
madvise(DONTNEED) will throw away the contents of the whole page at the
given address, even if the given length is less than the page size.  One
can argue about whether that's the correct behaviour, but that's what it's
done for a long time in Linux at least.

That means that the madvise() in ram_load(), on a setup where
TARGET_PAGE_SIZE is smaller than the host page size, can throw away data
in guest pages adjacent to the one it's actually processing right now,
leading to guest memory corruption on an incoming migration.

This patch therefore disables the madvise() if the host page size is
larger than TARGET_PAGE_SIZE.  This means we don't get the benefits of that
madvise() in this case, but a more complete fix is more difficult to
accomplish.  This at least fixes the guest memory corruption.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Reported-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 45e6cee42b)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:41:11 -06:00
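
A sketch of the guard described, assuming POSIX madvise() and sysconf() (the
real code goes through QEMU's own wrappers and TARGET_PAGE_SIZE):

    #include <sys/mman.h>
    #include <unistd.h>

    #define TARGET_PAGE_SIZE_SKETCH 4096        /* illustrative */

    static void discard_guest_page_sketch(void *addr, size_t len)
    {
        long host_page_size = sysconf(_SC_PAGESIZE);

        /* madvise(DONTNEED) throws away whole host pages, so a host page
         * larger than the guest page would also wipe neighbouring guest
         * pages; skip the optimization in that case. */
        if (host_page_size <= TARGET_PAGE_SIZE_SKETCH) {
            madvise(addr, len, MADV_DONTNEED);
        }
    }
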
David Gibson
3b4fc1f9d2 Fix off-by-1 error in RAM migration code
The code for migrating (or savevm-ing) memory pages starts off by creating
a dirty bitmap and filling it with 1s.  Except, actually, because bit
addresses are 0-based it fills every bit except bit 0 with 1s and puts an
extra 1 beyond the end of the bitmap, potentially corrupting unrelated
memory.  Oops.  This patch fixes it.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Anthony Liguori <aliguori@us.ibm.com>
(cherry picked from commit 7ec81e56ed)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:40:38 -06:00
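
An illustration of the off-by-one: filling bits 1..N instead of 0..N-1 both
misses bit 0 and writes one bit past the end of the allocation. The helpers are
generic, not the migration code itself.

    #include <limits.h>

    #define BITS_PER_LONG_SKETCH (sizeof(unsigned long) * CHAR_BIT)

    static void set_bit_sketch(unsigned long *map, unsigned long bit)
    {
        map[bit / BITS_PER_LONG_SKETCH] |= 1UL << (bit % BITS_PER_LONG_SKETCH);
    }

    static void fill_dirty_bitmap_sketch(unsigned long *map, unsigned long nbits)
    {
        unsigned long i;

        /* buggy: for (i = 1; i <= nbits; i++) -- skips bit 0, touches bit nbits */
        for (i = 0; i < nbits; i++) {
            set_bit_sketch(map, i);
        }
    }
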
Brad Smith
d67d95f24e Disable semaphores fallback code for OpenBSD
Disable the semaphores fallback code for OpenBSD as modern OpenBSD
releases now have sem_timedwait().

Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit 927fa909d5)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:36:36 -06:00
Brad Smith
0a7ad69a0f Fix semaphores fallback code
As reported in bug 1087114, the semaphores fallback code is broken, which
results in QEMU crashing and becoming unusable.

This patch is from Paolo.

This needs to be backported to the 1.3 stable tree as well.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Brad Smith <brad@comstyle.com>
Signed-off-by: Blue Swirl <blauwirbel@gmail.com>
(cherry picked from commit a795ef8dcb)

Signed-off-by: Michael Roth <mdroth@linux.vnet.ibm.com>
2013-01-14 15:36:28 -06:00
2699 changed files with 201820 additions and 393493 deletions

.gitignore

@@ -1,65 +1,60 @@
/config-devices.*
/config-all-devices.*
/config-all-disas.*
/config-host.*
/config-target.*
/config.status
/trace/generated-tracers.h
/trace/generated-tracers.c
/trace/generated-tracers-dtrace.h
/trace/generated-tracers.dtrace
/trace/generated-events.h
/trace/generated-events.c
/trace/generated-ust-provider.h
/trace/generated-ust.c
/libcacard/trace/generated-tracers.c
config-devices.*
config-all-devices.*
config-host.*
config-target.*
trace.h
trace.c
trace-dtrace.h
trace-dtrace.dtrace
*-timestamp
/*-softmmu
/*-darwin-user
/*-linux-user
/*-bsd-user
/libdis*
/libuser
/linux-headers/asm
/qga/qapi-generated
/qapi-generated
/qapi-types.[ch]
/qapi-visit.[ch]
/qmp-commands.h
/qmp-marshal.c
/qemu-doc.html
/qemu-tech.html
/qemu-doc.info
/qemu-tech.info
/qemu.1
/qemu.pod
/qemu-img.1
/qemu-img.pod
/qemu-img
/qemu-nbd
/qemu-nbd.8
/qemu-nbd.pod
/qemu-options.def
/qemu-options.texi
/qemu-img-cmds.texi
/qemu-img-cmds.h
/qemu-io
/qemu-ga
/qemu-bridge-helper
/qemu-monitor.texi
/qmp-commands.txt
/vscclient
/fsdev/virtfs-proxy-helper
/fsdev/virtfs-proxy-helper.1
/fsdev/virtfs-proxy-helper.pod
*-softmmu
*-darwin-user
*-linux-user
*-bsd-user
libdis*
libuser
linux-headers/asm
qapi-generated
qapi-types.[ch]
qapi-visit.[ch]
qmp-commands.h
qmp-marshal.c
qemu-doc.html
qemu-tech.html
qemu-doc.info
qemu-tech.info
qemu.1
qemu.pod
qemu-img.1
qemu-img.pod
qemu-img
qemu-nbd
qemu-nbd.8
qemu-nbd.pod
qemu-options.def
qemu-options.texi
qemu-img-cmds.texi
qemu-img-cmds.h
qemu-io
qemu-ga
qemu-bridge-helper
qemu-monitor.texi
vscclient
QMP/qmp-commands.txt
test-coroutine
test-qmp-input-visitor
test-qmp-output-visitor
test-string-input-visitor
test-string-output-visitor
test-visitor-serialization
fsdev/virtfs-proxy-helper.1
fsdev/virtfs-proxy-helper.pod
.gdbinit
*.a
*.aux
*.cp
*.dvi
*.exe
*.dll
*.so
*.mo
*.fn
*.ky
*.log
@@ -73,31 +68,26 @@
*.tp
*.vr
*.d
!/scripts/qemu-guest-agent/fsfreeze-hook.d
*.o
*.lo
*.la
*.pc
.libs
.sdk
*.gcda
*.gcno
/pc-bios/bios-pq/status
/pc-bios/vgabios-pq/status
/pc-bios/optionrom/linuxboot.asm
/pc-bios/optionrom/linuxboot.bin
/pc-bios/optionrom/linuxboot.raw
/pc-bios/optionrom/linuxboot.img
/pc-bios/optionrom/multiboot.asm
/pc-bios/optionrom/multiboot.bin
/pc-bios/optionrom/multiboot.raw
/pc-bios/optionrom/multiboot.img
/pc-bios/optionrom/kvmvapic.asm
/pc-bios/optionrom/kvmvapic.bin
/pc-bios/optionrom/kvmvapic.raw
/pc-bios/optionrom/kvmvapic.img
/pc-bios/s390-ccw/s390-ccw.elf
/pc-bios/s390-ccw/s390-ccw.img
*.swp
*.orig
.pc
patches
pc-bios/bios-pq/status
pc-bios/vgabios-pq/status
pc-bios/optionrom/linuxboot.bin
pc-bios/optionrom/linuxboot.raw
pc-bios/optionrom/linuxboot.img
pc-bios/optionrom/multiboot.bin
pc-bios/optionrom/multiboot.raw
pc-bios/optionrom/multiboot.img
pc-bios/optionrom/kvmvapic.bin
pc-bios/optionrom/kvmvapic.raw
pc-bios/optionrom/kvmvapic.img
.stgit-*
cscope.*
tags

.gitmodules

@@ -1,30 +1,24 @@
[submodule "roms/vgabios"]
path = roms/vgabios
url = git://git.qemu-project.org/vgabios.git/
url = git://git.qemu.org/vgabios.git/
[submodule "roms/seabios"]
path = roms/seabios
url = git://git.qemu-project.org/seabios.git/
url = git://git.qemu.org/seabios.git/
[submodule "roms/SLOF"]
path = roms/SLOF
url = git://git.qemu-project.org/SLOF.git
url = git://git.qemu.org/SLOF.git
[submodule "roms/ipxe"]
path = roms/ipxe
url = git://git.qemu-project.org/ipxe.git
url = git://git.qemu.org/ipxe.git
[submodule "roms/openbios"]
path = roms/openbios
url = git://git.qemu-project.org/openbios.git
[submodule "roms/openhackware"]
path = roms/openhackware
url = git://git.qemu-project.org/openhackware.git
url = git://git.qemu.org/openbios.git
[submodule "roms/qemu-palcode"]
path = roms/qemu-palcode
url = git://github.com/rth7680/qemu-palcode.git
url = git://repo.or.cz/qemu-palcode.git
[submodule "roms/sgabios"]
path = roms/sgabios
url = git://git.qemu-project.org/sgabios.git
url = git://git.qemu.org/sgabios.git
[submodule "pixman"]
path = pixman
url = git://anongit.freedesktop.org/pixman
[submodule "dtc"]
path = dtc
url = git://git.qemu-project.org/dtc.git

.mailmap

@@ -2,8 +2,7 @@
# into proper addresses so that they are counted properly in git shortlog output.
#
Andrzej Zaborowski <balrogg@gmail.com> balrog <balrog@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Anthony Liguori <anthony@codemonkey.ws> Anthony Liguori <aliguori@us.ibm.com>
Anthony Liguori <aliguori@us.ibm.com> aliguori <aliguori@c046a42c-6fe2-441c-8c8c-71466251a162>
Aurelien Jarno <aurelien@aurel32.net> aurel32 <aurel32@c046a42c-6fe2-441c-8c8c-71466251a162>
Blue Swirl <blauwirbel@gmail.com> blueswir1 <blueswir1@c046a42c-6fe2-441c-8c8c-71466251a162>
Edgar E. Iglesias <edgar.iglesias@gmail.com> edgar_igl <edgar_igl@c046a42c-6fe2-441c-8c8c-71466251a162>

.travis.yml

@@ -1,81 +0,0 @@
language: c
python:
- "2.4"
compiler:
- gcc
- clang
notifications:
irc:
channels:
- "irc.oftc.net#qemu"
on_success: change
on_failure: always
env:
global:
- TEST_CMD="make check"
- EXTRA_CONFIG=""
# Development packages, EXTRA_PKGS saved for additional builds
- CORE_PKGS="libusb-1.0-0-dev libiscsi-dev librados-dev libncurses5-dev"
- NET_PKGS="libseccomp-dev libgnutls-dev libssh2-1-dev libspice-server-dev libspice-protocol-dev libnss3-dev"
- GUI_PKGS="libgtk-3-dev libvte-2.90-dev libsdl1.2-dev libpng12-dev libpixman-1-dev"
- EXTRA_PKGS=""
matrix:
- TARGETS=alpha-softmmu,alpha-linux-user
- TARGETS=arm-softmmu,arm-linux-user
- TARGETS=aarch64-softmmu,aarch64-linux-user
- TARGETS=cris-softmmu
- TARGETS=i386-softmmu,x86_64-softmmu
- TARGETS=lm32-softmmu
- TARGETS=m68k-softmmu
- TARGETS=microblaze-softmmu,microblazeel-softmmu
- TARGETS=mips-softmmu,mips64-softmmu,mips64el-softmmu,mipsel-softmmu
- TARGETS=moxie-softmmu
- TARGETS=or32-softmmu,
- TARGETS=ppc-softmmu,ppc64-softmmu,ppcemb-softmmu
- TARGETS=s390x-softmmu
- TARGETS=sh4-softmmu,sh4eb-softmmu
- TARGETS=sparc-softmmu,sparc64-softmmu
- TARGETS=unicore32-softmmu
- TARGETS=xtensa-softmmu,xtensaeb-softmmu
before_install:
- git submodule update --init --recursive
- sudo apt-get update -qq
- sudo apt-get install -qq ${CORE_PKGS} ${NET_PKGS} ${GUI_PKGS} ${EXTRA_PKGS}
script: "./configure --target-list=${TARGETS} ${EXTRA_CONFIG} && make && ${TEST_CMD}"
matrix:
# We manually include a number of additional build for non-standard bits
include:
# Debug related options
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-debug --enable-tcg-interpreter"
compiler: gcc
# All the extra -dev packages
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="libaio-dev libcap-ng-dev libattr1-dev libbrlapi-dev uuid-dev libusb-1.0.0-dev"
compiler: gcc
# Currently configure doesn't force --disable-pie
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-gprof --enable-gcov --disable-pie"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="sparse"
EXTRA_CONFIG="--enable-sparse"
compiler: gcc
# All the trace backends (apart from dtrace)
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=stderr"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=simple"
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_CONFIG="--enable-trace-backend=ftrace"
TEST_CMD=""
compiler: gcc
- env: TARGETS=i386-softmmu,x86_64-softmmu
EXTRA_PKGS="liblttng-ust-dev liburcu-dev"
EXTRA_CONFIG="--enable-trace-backend=ust"
compiler: gcc

CODING_STYLE

@@ -84,10 +84,3 @@ and clarity it comes on a line by itself:
Rationale: a consistent (except for functions...) bracing style reduces
ambiguity and avoids needless churn when lines are added or removed.
Furthermore, it is the QEMU coding style.
5. Declarations
Mixed declarations (interleaving statements and declarations within blocks)
are not allowed; declarations should be at the beginning of blocks. In other
words, the code should not generate warnings if using GCC's
-Wdeclaration-after-statement option.

Changelog

@@ -1,6 +1,6 @@
This file documents changes for QEMU releases 0.12 and earlier.
For changelog information for later releases, see
http://wiki.qemu-project.org/ChangeLog or look at the git history for
http://wiki.qemu.org/ChangeLog or look at the git history for
more detailed information.

HACKING

@@ -40,23 +40,8 @@ speaking, the size of guest memory can always fit into ram_addr_t but
it would not be correct to store an actual guest physical address in a
ram_addr_t.
For CPU virtual addresses there are several possible types.
vaddr is the best type to use to hold a CPU virtual address in
target-independent code. It is guaranteed to be large enough to hold a
virtual address for any target, and it does not change size from target
to target. It is always unsigned.
target_ulong is a type the size of a virtual address on the CPU; this means
it may be 32 or 64 bits depending on which target is being built. It should
therefore be used only in target-specific code, and in some
performance-critical built-per-target core code such as the TLB code.
There is also a signed version, target_long.
abi_ulong is for the *-user targets, and represents a type the size of
'void *' in that target's ABI. (This may not be the same as the size of a
full CPU virtual address in the case of target ABIs which use 32 bit pointers
on 64 bit CPUs, like sparc32plus.) Definitions of structures that must match
the target's ABI must use this type for anything that on the target is defined
to be an 'unsigned long' or a pointer type.
There is also a signed version, abi_long.
Use target_ulong (or abi_ulong) for CPU virtual addresses, however
devices should not need to use target_ulong.
Of course, take all of the above with a grain of salt. If you're about
to use some system interface that requires a type like size_t, pid_t or
@@ -93,15 +78,16 @@ avoided.
Use of the malloc/free/realloc/calloc/valloc/memalign/posix_memalign
APIs is not allowed in the QEMU codebase. Instead of these routines,
use the GLib memory allocation routines g_malloc/g_malloc0/g_new/
g_new0/g_realloc/g_free or QEMU's qemu_memalign/qemu_blockalign/qemu_vfree
g_new0/g_realloc/g_free or QEMU's qemu_vmalloc/qemu_memalign/qemu_vfree
APIs.
Please note that g_malloc will exit on allocation failure, so there
is no need to test for failure (as you would have to with malloc).
Calling g_malloc with a zero size is valid and will return NULL.
Memory allocated by qemu_memalign or qemu_blockalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32.
Memory allocated by qemu_vmalloc or qemu_memalign must be freed with
qemu_vfree, since breaking this will cause problems on Win32 and user
emulators.
4. String manipulation
@@ -137,23 +123,3 @@ gcc's printf attribute directive in the prototype.
This makes it so gcc's -Wformat and -Wformat-security options can do
their jobs and cross-check format strings with the number and types
of arguments.
6. C standard, implementation defined and undefined behaviors
C code in QEMU should be written to the C99 language specification. A copy
of the final version of the C99 standard with corrigenda TC1, TC2, and TC3
included, formatted as a draft, can be downloaded from:
http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf
The C language specification defines regions of undefined behavior and
implementation defined behavior (to give compiler authors enough leeway to
produce better code). In general, code in QEMU should follow the language
specification and avoid both undefined and implementation defined
constructs. ("It works fine on the gcc I tested it with" is not a valid
argument...) However there are a few areas where we allow ourselves to
assume certain behaviors because in practice all the platforms we care about
behave in the same way and writing strictly conformant code would be
painful. These are:
* you may assume that integers are 2s complement representation
* you may assume that right shift of a signed integer duplicates
the sign bit (ie it is an arithmetic shift, not a logical shift)

LICENSE

@@ -1,21 +1,16 @@
The following points clarify the QEMU license:
1) QEMU as a whole is released under the GNU General Public License,
version 2.
1) QEMU as a whole is released under the GNU General Public License
2) Parts of QEMU have specific licenses which are compatible with the
GNU General Public License, version 2. Hence each source file contains
its own licensing information. Source files with no licensing information
are released under the GNU General Public License, version 2 or (at your
option) any later version.
GNU General Public License. Hence each source file contains its own
licensing information.
As of July 2013, contributions under version 2 of the GNU General Public
License (and no later version) are only accepted for the following files
or directories: bsd-user/, linux-user/, hw/misc/vfio.c, hw/xen/xen_pt*.
Many hardware device emulation sources are released under the BSD license.
3) The Tiny Code Generator (TCG) is released under the BSD license
(see license headers in files).
4) QEMU is a trademark of Fabrice Bellard.
Fabrice Bellard and the QEMU team
Fabrice Bellard.

MAINTAINERS

@@ -50,14 +50,8 @@ Descriptions of section entries:
General Project Administration
------------------------------
M: Anthony Liguori <aliguori@amazon.com>
Responsible Disclosure, Reporting Security Issues
------------------------------
W: http://wiki.qemu.org/SecurityProcess
M: Michael S. Tsirkin <mst@redhat.com>
M: Anthony Liguori <aliguori@amazon.com>
L: secalert@redhat.com
M: Anthony Liguori <aliguori@us.ibm.com>
M: Paul Brook <paul@codesourcery.com>
Guest CPU cores (TCG):
----------------------
@@ -65,131 +59,98 @@ Alpha
M: Richard Henderson <rth@twiddle.net>
S: Maintained
F: target-alpha/
F: hw/alpha/
ARM
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: target-arm/
F: hw/arm/
F: hw/cpu/a*mpcore.c
CRIS
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: target-cris/
F: hw/cris/
LM32
M: Michael Walle <michael@walle.cc>
S: Maintained
F: target-lm32/
F: hw/lm32/
F: hw/char/lm32_*
M68K
S: Orphan
M: Paul Brook <paul@codesourcery.com>
S: Odd Fixes
F: target-m68k/
F: hw/m68k/
MicroBlaze
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: target-microblaze/
F: hw/microblaze/
MIPS
M: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes
F: target-mips/
F: hw/mips/
Moxie
M: Anthony Green <green@moxielogic.com>
S: Maintained
F: target-moxie/
OpenRISC
M: Jia Liu <proljc@gmail.com>
S: Maintained
F: target-openrisc/
F: hw/openrisc/
PowerPC
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: target-ppc/
F: hw/ppc/
S390
M: Richard Henderson <rth@twiddle.net>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: target-s390x/
F: hw/s390x/
SH4
M: Aurelien Jarno <aurelien@aurel32.net>
S: Odd Fixes
F: target-sh4/
F: hw/sh4/
SPARC
M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: target-sparc/
F: hw/sparc/
F: hw/sparc64/
UniCore32
M: Guan Xuetao <gxt@mprc.pku.edu.cn>
S: Maintained
F: target-unicore32/
F: hw/unicore32/
X86
M: qemu-devel@nongnu.org
S: Odd Fixes
F: target-i386/
F: hw/i386/
Xtensa
M: Max Filippov <jcmvbkbc@gmail.com>
W: http://wiki.osll.spb.ru/doku.php?id=etc:users:jcmvbkbc:qemu-target-xtensa
S: Maintained
F: target-xtensa/
F: hw/xtensa/
Guest CPU Cores (KVM):
----------------------
Overall
M: Paolo Bonzini <pbonzini@redhat.com>
M: Avi Kivity <avi@redhat.com>
M: Marcelo Tosatti <mtosatti@redhat.com>
L: kvm@vger.kernel.org
S: Supported
F: kvm-*
F: */kvm.*
ARM
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: target-arm/kvm.c
PPC
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: target-ppc/kvm.c
S390
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: target-s390x/kvm.c
F: hw/intc/s390_flic.[hc]
X86
M: Avi Kivity <avi@redhat.com>
M: Marcelo Tosatti <mtosatti@redhat.com>
L: kvm@vger.kernel.org
S: Supported
@@ -227,171 +188,162 @@ F: *win32*
ARM Machines
------------
Allwinner-a10
M: Li Guang <lig.fnst@cn.fujitsu.com>
S: Maintained
F: hw/*/allwinner-a10*
F: include/hw/*/allwinner-a10*
F: hw/arm/cubieboard.c
Exynos
M: Evgeny Voevodin <e.voevodin@samsung.com>
M: Maksim Kozlov <m.kozlov@samsung.com>
M: Igor Mitsyanko <i.mitsyanko@gmail.com>
M: Igor Mitsyanko <i.mitsyanko@samsung.com>
M: Dmitry Solodkiy <d.solodkiy@samsung.com>
S: Maintained
F: hw/*/exynos*
F: hw/exynos*
Calxeda Highbank
M: Mark Langsdorf <mark.langsdorf@calxeda.com>
S: Supported
F: hw/arm/highbank.c
F: hw/net/xgmac.c
Canon DIGIC
M: Antony Pavlov <antonynpavlov@gmail.com>
S: Maintained
F: include/hw/arm/digic.h
F: hw/*/digic*
F: hw/highbank.c
F: hw/xgmac.c
Gumstix
M: qemu-devel@nongnu.org
S: Orphan
F: hw/arm/gumstix.c
F: hw/gumstix.c
i.MX31
M: Peter Chubb <peter.chubb@nicta.com.au>
S: Odd fixes
F: hw/*/imx*
F: hw/arm/kzm.c
F: hw/imx*
F: hw/kzm.c
Integrator CP
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/arm/integratorcp.c
F: hw/integratorcp.c
Mainstone
M: qemu-devel@nongnu.org
S: Orphan
F: hw/arm/mainstone.c
F: hw/mainstone.c
Musicpal
M: Jan Kiszka <jan.kiszka@web.de>
S: Maintained
F: hw/arm/musicpal.c
F: hw/musicpal.c
nSeries
M: Andrzej Zaborowski <balrogg@gmail.com>
S: Maintained
F: hw/arm/nseries.c
F: hw/nseries.c
Palm
M: Andrzej Zaborowski <balrogg@gmail.com>
S: Maintained
F: hw/arm/palm.c
F: hw/palm.c
Real View
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/arm/realview*
F: hw/realview*
Spitz
M: Andrzej Zaborowski <balrogg@gmail.com>
S: Maintained
F: hw/arm/spitz.c
F: hw/spitz.c
Stellaris
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/*/stellaris*
F: hw/stellaris.c
Versatile PB
M: Paul Brook <paul@codesourcery.com>
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/*/versatile*
F: hw/versatilepb.c
Xilinx Zynq
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/arm/xilinx_zynq.c
F: hw/misc/zynq_slcr.c
F: hw/*/cadence_*
F: hw/ssi/xilinx_spips.c
F: hw/xilinx_zynq.c
F: hw/zynq_slcr.c
F: hw/cadence_*
F: hw/xilinx_spips.c
CRIS Machines
-------------
Axis Dev88
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/cris/axis_dev88.c
F: hw/*/etraxfs_*.c
F: hw/axis_dev88.c
etraxfs
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/etraxfs.c
LM32 Machines
-------------
EVR32 and uclinux BSP
M: Michael Walle <michael@walle.cc>
S: Maintained
F: hw/lm32/lm32_boards.c
F: hw/lm32_boards.c
milkymist
M: Michael Walle <michael@walle.cc>
S: Maintained
F: hw/lm32/milkymist.c
F: hw/milkymist.c
M68K Machines
-------------
an5206
S: Orphan
F: hw/m68k/an5206.c
M: Paul Brook <paul@codesourcery.com>
S: Maintained
F: hw/an5206.c
dummy_m68k
S: Orphan
F: hw/m68k/dummy_m68k.c
M: Paul Brook <paul@codesourcery.com>
S: Maintained
F: hw/dummy_m68k.c
mcf5208
S: Orphan
F: hw/m68k/mcf5208.c
M: Paul Brook <paul@codesourcery.com>
S: Maintained
F: hw/mcf5208.c
MicroBlaze Machines
-------------------
petalogix_s3adsp1800
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/microblaze/petalogix_s3adsp1800_mmu.c
F: hw/petalogix_s3adsp1800.c
petalogix_ml605
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/microblaze/petalogix_ml605_mmu.c
F: hw/petalogix_ml605_mmu.c
MIPS Machines
-------------
Jazz
M: Hervé Poussineau <hpoussin@reactos.org>
S: Maintained
F: hw/mips/mips_jazz.c
F: hw/mips_jazz.c
Malta
M: Aurelien Jarno <aurelien@aurel32.net>
S: Maintained
F: hw/mips/mips_malta.c
F: hw/mips_malta.c
Mipssim
M: qemu-devel@nongnu.org
S: Orphan
F: hw/mips/mips_mipssim.c
F: hw/mips_mipssim.c
R4000
M: Aurelien Jarno <aurelien@aurel32.net>
S: Maintained
F: hw/mips/mips_r4k.c
OpenRISC Machines
-----------------
or1k-sim
M: Jia Liu <proljc@gmail.com>
S: Maintained
F: hw/openrisc/openrisc_sim.c
F: hw/mips_r4k.c
PowerPC Machines
----------------
@@ -399,13 +351,13 @@ PowerPC Machines
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: hw/ppc/ppc405_boards.c
F: hw/ppc405_boards.c
Bamboo
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: hw/ppc/ppc440_bamboo.c
F: hw/ppc440_bamboo.c
e500
M: Alexander Graf <agraf@suse.de>
@@ -421,262 +373,214 @@ M: Scott Wood <scottwood@freescale.com>
L: qemu-ppc@nongnu.org
S: Supported
F: hw/ppc/mpc8544ds.c
F: hw/ppc/mpc8544_guts.c
F: hw/mpc8544_guts.c
New World
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc/mac_newworld.c
F: hw/pci-host/uninorth.c
F: hw/pci-bridge/dec.[hc]
F: hw/misc/macio/
F: hw/ppc_newworld.c
F: hw/unin_pci.c
F: hw/dec_pci.[hc]
Old World
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Maintained
F: hw/ppc/mac_oldworld.c
F: hw/pci-host/grackle.c
F: hw/misc/macio/
F: hw/ppc_oldworld.c
F: hw/grackle_pci.c
PReP
M: Andreas Färber <andreas.faerber@web.de>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: hw/ppc/prep.c
F: hw/pci-host/prep.[hc]
F: hw/isa/pc87312.[hc]
F: hw/ppc_prep.c
F: hw/prep_pci.[hc]
sPAPR
M: David Gibson <david@gibson.dropbear.id.au>
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Supported
F: hw/*/spapr*
F: include/hw/*/spapr*
F: hw/*/xics*
F: include/hw/*/xics*
F: pc-bios/spapr-rtas/*
F: hw/spapr*
virtex_ml507
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: hw/ppc/virtex_ml507.c
F: hw/virtex_ml507.c
SH4 Machines
------------
R2D
M: Magnus Damm <magnus.damm@gmail.com>
S: Maintained
F: hw/sh4/r2d.c
F: hw/r2d.c
Shix
M: Magnus Damm <magnus.damm@gmail.com>
S: Orphan
F: hw/sh4/shix.c
F: hw/shix.c
SPARC Machines
--------------
Sun4m
M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: hw/sparc/sun4m.c
F: hw/sun4m.c
Sun4u
M: Blue Swirl <blauwirbel@gmail.com>
S: Maintained
F: hw/sparc64/sun4u.c
F: hw/sun4u.c
Leon3
M: Fabien Chouteau <chouteau@adacore.com>
S: Maintained
F: hw/sparc/leon3.c
F: hw/*/grlib*
F: hw/leon3.c
F: hw/grlib*
S390 Machines
-------------
S390 Virtio
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: hw/s390x/s390-*.c
S390 Virtio-ccw
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
M: Alexander Graf <agraf@suse.de>
S: Supported
F: hw/s390x/s390-virtio-ccw.c
F: hw/s390x/css.[hc]
F: hw/s390x/sclp*.[hc]
F: hw/s390x/ipl*.[hc]
T: git git://github.com/cohuck/qemu virtio-ccw-upstr
F: hw/s390-*.c
UniCore32 Machines
-------------
PKUnity-3 SoC initramfs-with-busybox
M: Guan Xuetao <gxt@mprc.pku.edu.cn>
S: Maintained
F: hw/*/puv3*
F: hw/puv3*
F: hw/unicore32/
X86 Machines
------------
PC
M: Anthony Liguori <aliguori@amazon.com>
M: Michael S. Tsirkin <mst@redhat.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: include/hw/i386/
F: hw/i386/
F: hw/pci-host/piix.c
F: hw/pci-host/q35.c
F: hw/pci-host/pam.c
F: include/hw/pci-host/q35.h
F: include/hw/pci-host/pam.h
F: hw/isa/piix4.c
F: hw/isa/lpc_ich9.c
F: hw/i2c/smbus_ich9.c
F: hw/acpi/piix4.c
F: hw/acpi/ich9.c
F: include/hw/acpi/ich9.h
F: include/hw/acpi/piix.h
F: hw/pc.[ch]
F: hw/pc_piix.c
Xtensa Machines
---------------
sim
M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained
F: hw/xtensa/xtensa_sim.c
F: hw/xtensa_sim.c
Avnet LX60
M: Max Filippov <jcmvbkbc@gmail.com>
S: Maintained
F: hw/xtensa/xtensa_lx60.c
F: hw/xtensa_lx60.c
Devices
-------
IDE
M: Kevin Wolf <kwolf@redhat.com>
S: Odd Fixes
F: include/hw/ide.h
F: hw/ide/
OMAP
M: Peter Maydell <peter.maydell@linaro.org>
S: Maintained
F: hw/*/omap*
F: hw/omap*
PCI
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
F: include/hw/pci/*
F: hw/pci/*
F: hw/acpi/*
F: hw/pci*
F: hw/piix*
ppc4xx
M: Alexander Graf <agraf@suse.de>
L: qemu-ppc@nongnu.org
S: Odd Fixes
F: hw/ppc/ppc4*.c
F: hw/ppc4xx*.[hc]
ppce500
M: Alexander Graf <agraf@suse.de>
M: Scott Wood <scottwood@freescale.com>
L: qemu-ppc@nongnu.org
S: Supported
F: hw/ppc/e500*
F: hw/ppce500_*
SCSI
M: Paolo Bonzini <pbonzini@redhat.com>
S: Supported
F: include/hw/scsi*
F: hw/scsi/*
F: hw/virtio-scsi.*
F: hw/scsi*
T: git git://github.com/bonzini/qemu.git scsi-next
LSI53C895A
S: Orphan
F: hw/scsi/lsi53c895a.c
M: Paul Brook <paul@codesourcery.com>
S: Odd Fixes
F: hw/lsi53c895a.c
SSI
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
S: Maintained
F: hw/ssi/*
F: hw/block/m25p80.c
F: hw/ssi.*
F: hw/m25p80.c
USB
M: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained
F: hw/usb/*
F: tests/usb-hcd-ehci-test.c
F: hw/usb*
VFIO
M: Alex Williamson <alex.williamson@redhat.com>
S: Supported
F: hw/misc/vfio.c
F: hw/vfio*
vhost
M: Michael S. Tsirkin <mst@redhat.com>
S: Supported
F: hw/*/*vhost*
F: hw/vhost*
virtio
M: Anthony Liguori <aliguori@amazon.com>
M: Michael S. Tsirkin <mst@redhat.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: hw/*/virtio*
F: hw/virtio*
virtio-9p
M: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
S: Supported
F: hw/9pfs/
F: fsdev/
F: tests/virtio-9p-test.c
T: git git://github.com/kvaneesh/QEMU.git
virtio-blk
M: Kevin Wolf <kwolf@redhat.com>
M: Stefan Hajnoczi <stefanha@redhat.com>
S: Supported
F: hw/block/virtio-blk.c
virtio-ccw
M: Cornelia Huck <cornelia.huck@de.ibm.com>
M: Christian Borntraeger <borntraeger@de.ibm.com>
S: Supported
F: hw/s390x/virtio-ccw.[hc]
T: git git://github.com/cohuck/qemu virtio-ccw-upstr
F: hw/virtio-blk*
virtio-serial
M: Amit Shah <amit.shah@redhat.com>
S: Supported
F: hw/char/virtio-serial-bus.c
F: hw/char/virtio-console.c
nvme
M: Keith Busch <keith.busch@intel.com>
S: Supported
F: hw/block/nvme*
F: tests/nvme-test.c
F: hw/virtio-serial*
F: hw/virtio-console*
Xilinx EDK
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Edgar E. Iglesias <edgar.iglesias@gmail.com>
S: Maintained
F: hw/*/xilinx_*
F: include/hw/xilinx.h
F: hw/xilinx_axi*
F: hw/xilinx_uartlite.c
F: hw/xilinx_intc.c
F: hw/xilinx_ethlite.c
F: hw/xilinx_timer.c
F: hw/xilinx.h
F: hw/xilinx_spi.c
Subsystems
----------
Audio
M: Vassili Karpov (malc) <av1474@comtv.ru>
M: Gerd Hoffmann <kraxel@redhat.com>
S: Maintained
F: audio/
F: hw/audio/
F: tests/ac97-test.c
F: tests/es1370-test.c
F: tests/intel-hda-test.c
Block
M: Kevin Wolf <kwolf@redhat.com>
@@ -684,14 +588,9 @@ M: Stefan Hajnoczi <stefanha@redhat.com>
S: Supported
F: block*
F: block/
F: hw/block/
F: qemu-img*
F: qemu-io*
T: git git://repo.or.cz/qemu/kevin.git block
T: git git://github.com/stefanha/qemu.git block
Character Devices
M: Anthony Liguori <aliguori@amazon.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: qemu-char.c
@@ -699,20 +598,13 @@ CPU
M: Andreas Färber <afaerber@suse.de>
S: Supported
F: qom/cpu.c
F: include/qom/cpu.h
F: target-i386/cpu.c
ICC Bus
M: Igor Mammedov <imammedo@redhat.com>
S: Supported
F: include/hw/cpu/icc_bus.h
F: hw/cpu/icc_bus.c
F: include/qemu/cpu.h
Device Tree
M: Peter Crosthwaite <peter.crosthwaite@xilinx.com>
M: Peter Crosthwaite <peter.crosthwaite@petalogix.com>
M: Alexander Graf <agraf@suse.de>
S: Maintained
F: device_tree.[ch]
F: device-tree.[ch]
GDB stub
M: qemu-devel@nongnu.org
@@ -723,51 +615,39 @@ F: gdb-xml/
SPICE
M: Gerd Hoffmann <kraxel@redhat.com>
S: Supported
F: include/ui/qemu-spice.h
F: ui/qemu-spice.h
F: ui/spice-*.c
F: audio/spiceaudio.c
F: hw/display/qxl*
F: hw/qxl*
Graphics
M: Anthony Liguori <aliguori@amazon.com>
M: Gerd Hoffmann <kraxel@redhat.com>
S: Odd Fixes
M: Anthony Liguori <aliguori@us.ibm.com>
S: Maintained
F: ui/
Cocoa graphics
M: Andreas Färber <andreas.faerber@web.de>
M: Peter Maydell <peter.maydell@linaro.org>
S: Odd Fixes
F: ui/cocoa.m
Main loop
M: Anthony Liguori <aliguori@amazon.com>
M: Anthony Liguori <aliguori@us.ibm.com>
S: Supported
F: vl.c
Human Monitor (HMP)
Monitor (QMP/HMP)
M: Luiz Capitulino <lcapitulino@redhat.com>
S: Maintained
M: Markus Armbruster <armbru@redhat.com>
S: Supported
F: monitor.c
F: hmp.c
F: hmp-commands.hx
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
Network device layer
M: Anthony Liguori <aliguori@amazon.com>
M: Anthony Liguori <aliguori@us.ibm.com>
M: Stefan Hajnoczi <stefanha@redhat.com>
S: Maintained
F: net/
T: git git://github.com/stefanha/qemu.git net
Netmap network backend
M: Luigi Rizzo <rizzo@iet.unipi.it>
M: Giuseppe Lettieri <g.lettieri@iet.unipi.it>
M: Vincenzo Maffione <v.maffione@gmail.com>
W: http://info.iet.unipi.it/~luigi/netmap/
S: Maintained
F: net/netmap.c
Network Block Device (NBD)
M: Paolo Bonzini <pbonzini@redhat.com>
S: Odd Fixes
@@ -776,41 +656,6 @@ F: nbd.*
F: qemu-nbd.c
T: git git://github.com/bonzini/qemu.git nbd-next
QAPI
M: Luiz Capitulino <lcapitulino@redhat.com>
M: Michael Roth <mdroth@linux.vnet.ibm.com>
S: Maintained
F: qapi/
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
QAPI Schema
M: Eric Blake <eblake@redhat.com>
M: Luiz Capitulino <lcapitulino@redhat.com>
M: Markus Armbruster <armbru@redhat.com>
S: Supported
F: qapi-schema.json
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
QOM
M: Anthony Liguori <aliguori@amazon.com>
M: Andreas Färber <afaerber@suse.de>
S: Supported
T: git git://github.com/afaerber/qemu-cpu.git qom-next
F: include/qom/
X: include/qom/cpu.h
F: qom/
X: qom/cpu.c
F: tests/qom-test.c
QMP
M: Luiz Capitulino <lcapitulino@redhat.com>
S: Maintained
F: qmp.c
F: monitor.c
F: qmp-commands.hx
F: QMP/
T: git git://repo.or.cz/qemu/qmp-unstable.git queue/qmp
SLIRP
M: Jan Kiszka <jan.kiszka@siemens.com>
S: Maintained
@@ -831,12 +676,6 @@ M: Blue Swirl <blauwirbel@gmail.com>
S: Odd Fixes
F: scripts/checkpatch.pl
Seccomp
M: Eduardo Otubo <otubo@linux.vnet.ibm.com>
S: Supported
F: qemu-seccomp.c
F: include/sysemu/seccomp.h
Usermode Emulation
------------------
BSD user
@@ -853,21 +692,19 @@ Tiny Code Generator (TCG)
-------------------------
Common code
M: qemu-devel@nongnu.org
M: Richard Henderson <rth@twiddle.net>
S: Maintained
F: tcg/
AArch64 target
M: Claudio Fontana <claudio.fontana@huawei.com>
M: Claudio Fontana <claudio.fontana@gmail.com>
S: Maintained
F: tcg/aarch64/
ARM target
M: Andrzej Zaborowski <balrogg@gmail.com>
S: Maintained
F: tcg/arm/
HPPA target
M: Richard Henderson <rth@twiddle.net>
S: Maintained
F: tcg/hppa/
i386 target
M: qemu-devel@nongnu.org
S: Maintained
@@ -908,73 +745,25 @@ TCI target
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: tcg/tci/
F: tci.c
Stable branches
---------------
Stable 1.0
L: qemu-stable@nongnu.org
T: git git://git.qemu-project.org/qemu-stable-1.0.git
T: git git://git.qemu.org/qemu-stable-1.0.git
S: Orphan
Stable 0.15
L: qemu-stable@nongnu.org
M: Andreas Färber <afaerber@suse.de>
T: git git://git.qemu-project.org/qemu-stable-0.15.git
S: Supported
T: git git://git.qemu.org/qemu-stable-0.15.git
S: Orphan
Stable 0.14
L: qemu-stable@nongnu.org
T: git git://git.qemu-project.org/qemu-stable-0.14.git
T: git git://git.qemu.org/qemu-stable-0.14.git
S: Orphan
Stable 0.10
L: qemu-stable@nongnu.org
T: git git://git.qemu-project.org/qemu-stable-0.10.git
T: git git://git.qemu.org/qemu-stable-0.10.git
S: Orphan
Block drivers
-------------
VMDK
M: Fam Zheng <famz@redhat.com>
S: Supported
F: block/vmdk.c
RBD
M: Josh Durgin <josh.durgin@inktank.com>
S: Supported
F: block/rbd.c
Sheepdog
M: MORITA Kazutaka <morita.kazutaka@lab.ntt.co.jp>
M: Liu Yuan <namei.unix@gmail.com>
L: sheepdog@lists.wpkg.org
S: Supported
F: block/sheepdog.c
VHDX
M: Jeff Cody <jcody@redhat.com>
S: Supported
F: block/vhdx*
VDI
M: Stefan Weil <sw@weilnetz.de>
S: Maintained
F: block/vdi.c
iSCSI
M: Ronnie Sahlberg <ronniesahlberg@gmail.com>
M: Paolo Bonzini <pbonzini@redhat.com>
M: Peter Lieven <pl@kamp.de>
S: Supported
F: block/iscsi.c
NFS
M: Peter Lieven <pl@kamp.de>
S: Maintained
F: block/nfs.c
SSH
M: Richard W.M. Jones <rjones@redhat.com>
S: Supported
F: block/ssh.c

Makefile

@@ -19,23 +19,10 @@ seems to have been used for an in-tree build. You can fix this by running \
endif
endif
CONFIG_SOFTMMU := $(if $(filter %-softmmu,$(TARGET_DIRS)),y)
CONFIG_USER_ONLY := $(if $(filter %-user,$(TARGET_DIRS)),y)
CONFIG_ALL=y
-include config-all-devices.mak
-include config-all-disas.mak
include $(SRC_PATH)/rules.mak
config-host.mak: $(SRC_PATH)/configure
@echo $@ is out-of-date, running configure
@# TODO: The next lines include code which supports a smooth
@# transition from old configurations without config.status.
@# This code can be removed after QEMU 1.7.
@if test -x config.status; then \
./config.status; \
else \
sed -n "/.*Configured with/s/[^:]*: //p" $@ | sh; \
fi
@sed -n "/.*Configured with/s/[^:]*: //p" $@ | sh
else
config-host.mak:
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
@@ -44,23 +31,12 @@ ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))
endif
endif
GENERATED_HEADERS = config-host.h qemu-options.def
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c
GENERATED_HEADERS += trace/generated-events.h
GENERATED_SOURCES += trace/generated-events.c
GENERATED_HEADERS += trace/generated-tracers.h
GENERATED_HEADERS = config-host.h trace.h qemu-options.def
ifeq ($(TRACE_BACKEND),dtrace)
GENERATED_HEADERS += trace/generated-tracers-dtrace.h
endif
GENERATED_SOURCES += trace/generated-tracers.c
ifeq ($(TRACE_BACKEND),ust)
GENERATED_HEADERS += trace/generated-ust-provider.h
GENERATED_SOURCES += trace/generated-ust.c
GENERATED_HEADERS += trace-dtrace.h
endif
GENERATED_HEADERS += qmp-commands.h qapi-types.h qapi-visit.h
GENERATED_SOURCES += qmp-marshal.c qapi-types.c qapi-visit.c trace.c
# Don't try to regenerate Makefile or configure
# We don't generate any of them
@@ -77,7 +53,7 @@ LIBS+=-lz $(LIBS_TOOLS)
HELPERS-$(CONFIG_LINUX) = qemu-bridge-helper$(EXESUF)
ifdef BUILD_DOCS
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 qmp-commands.txt
DOCS=qemu-doc.html qemu-tech.html qemu.1 qemu-img.1 qemu-nbd.8 QMP/qmp-commands.txt
ifdef CONFIG_VIRTFS
DOCS+=fsdev/virtfs-proxy-helper.1
endif
@@ -87,17 +63,14 @@ endif
SUBDIR_MAKEFLAGS=$(if $(V),,--no-print-directory) BUILD_DIR=$(BUILD_DIR)
SUBDIR_DEVICES_MAK=$(patsubst %, %/config-devices.mak, $(TARGET_DIRS))
SUBDIR_DEVICES_MAK_DEP=$(patsubst %, %-config-devices.mak.d, $(TARGET_DIRS))
SUBDIR_DEVICES_MAK_DEP=$(patsubst %, %/config-devices.mak.d, $(TARGET_DIRS))
ifeq ($(SUBDIR_DEVICES_MAK),)
config-all-devices.mak:
$(call quiet-command,echo '# no devices' > $@," GEN $@")
else
config-all-devices.mak: $(SUBDIR_DEVICES_MAK)
$(call quiet-command, sed -n \
's|^\([^=]*\)=\(.*\)$$|\1:=$$(findstring y,$$(\1)\2)|p' \
$(SUBDIR_DEVICES_MAK) | sort -u > $@, \
" GEN $@")
$(call quiet-command,cat $(SUBDIR_DEVICES_MAK) | grep =y | sort -u > $@," GEN $@")
endif
-include $(SUBDIR_DEVICES_MAK_DEP)
@@ -125,28 +98,9 @@ endif
defconfig:
rm -f config-all-devices.mak $(SUBDIR_DEVICES_MAK)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
-include config-all-devices.mak
dummy := $(call unnest-vars,, \
stub-obj-y \
util-obj-y \
qga-obj-y \
qga-vss-dll-obj-y \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
ifeq ($(CONFIG_SMARTCARD_NSS),y)
include $(SRC_PATH)/libcacard/Makefile
endif
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all modules
all: $(DOCS) $(TOOLS) $(HELPERS-y) recurse-all
config-host.h: config-host.h-timestamp
config-host.h-timestamp: config-host.mak
@@ -154,34 +108,30 @@ qemu-options.def: $(SRC_PATH)/qemu-options.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
SUBDIR_RULES=$(patsubst %,subdir-%, $(TARGET_DIRS))
SOFTMMU_SUBDIR_RULES=$(filter %-softmmu,$(SUBDIR_RULES))
$(SOFTMMU_SUBDIR_RULES): $(block-obj-y)
$(SOFTMMU_SUBDIR_RULES): config-all-devices.mak
subdir-%:
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C $* V="$(V)" TARGET_DIR="$*/" all,)
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/Makefile.objs
endif
subdir-libcacard: $(oslib-obj-y) $(trace-obj-y) qemu-timer-common.o
subdir-pixman: pixman/Makefile
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C pixman V="$(V)" all,)
pixman/Makefile: $(SRC_PATH)/pixman/configure
(cd pixman; CFLAGS="$(CFLAGS) -fPIC $(extra_cflags) $(extra_ldflags)" $(SRC_PATH)/pixman/configure $(AUTOCONF_HOST) --disable-gtk --disable-shared --enable-static)
(cd pixman; CFLAGS="$(CFLAGS) -fPIC" $(SRC_PATH)/pixman/configure $(AUTOCONF_HOST) --disable-gtk --disable-shared --enable-static)
$(SRC_PATH)/pixman/configure:
(cd $(SRC_PATH)/pixman; autoreconf -v --install)
DTC_MAKE_ARGS=-I$(SRC_PATH)/dtc VPATH=$(SRC_PATH)/dtc -C dtc V="$(V)" LIBFDT_srcdir=$(SRC_PATH)/dtc/libfdt
DTC_CFLAGS=$(CFLAGS) $(QEMU_CFLAGS)
DTC_CPPFLAGS=-I$(BUILD_DIR)/dtc -I$(SRC_PATH)/dtc -I$(SRC_PATH)/dtc/libfdt
$(SUBDIR_RULES): libqemustub.a
subdir-dtc:dtc/libfdt dtc/tests
$(call quiet-command,$(MAKE) $(DTC_MAKE_ARGS) CPPFLAGS="$(DTC_CPPFLAGS)" CFLAGS="$(DTC_CFLAGS)" LDFLAGS="$(LDFLAGS)" ARFLAGS="$(ARFLAGS)" CC="$(CC)" AR="$(AR)" LD="$(LD)" $(SUBDIR_MAKEFLAGS) libfdt/libfdt.a,)
$(filter %-softmmu,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) $(common-obj-y) $(extra-obj-y) subdir-libdis
dtc/%:
mkdir -p $@
$(SUBDIR_RULES): libqemuutil.a libqemustub.a $(common-obj-y)
$(filter %-user,$(SUBDIR_RULES)): $(universal-obj-y) $(trace-obj-y) subdir-libdis-user subdir-libuser
ROMSUBDIR_RULES=$(patsubst %,romsubdir-%, $(ROMS))
romsubdir-%:
@@ -191,33 +141,66 @@ ALL_SUBDIRS=$(TARGET_DIRS) $(patsubst %,pc-bios/%, $(ROMS))
recurse-all: $(SUBDIR_RULES) $(ROMSUBDIR_RULES)
$(BUILD_DIR)/version.o: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h | $(BUILD_DIR)/version.lo
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.o")
$(BUILD_DIR)/version.lo: $(SRC_PATH)/version.rc $(BUILD_DIR)/config-host.h
$(call quiet-command,$(WINDRES) -I$(BUILD_DIR) -o $@ $<," RC version.lo")
audio/audio.o audio/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
Makefile: $(version-obj-y) $(version-lobj-y)
QEMU_CFLAGS+=$(CURL_CFLAGS)
QEMU_CFLAGS += -I$(SRC_PATH)/include
ui/cocoa.o: ui/cocoa.m
ui/sdl.o audio/sdlaudio.o ui/sdl_zoom.o hw/baum.o: QEMU_CFLAGS += $(SDL_CFLAGS)
ui/vnc.o: QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
bt-host.o: QEMU_CFLAGS += $(BLUEZ_CFLAGS)
version.o: $(SRC_PATH)/version.rc config-host.h
$(call quiet-command,$(WINDRES) -I. -o $@ $<," RC $(TARGET_DIR)$@")
version-obj-$(CONFIG_WIN32) += version.o
######################################################################
# Build libraries
# Build library with stubs
libqemustub.a: $(stub-obj-y)
libqemuutil.a: $(util-obj-y) qapi-types.o qapi-visit.o
block-modules = $(foreach o,$(block-obj-m),"$(basename $(subst /,-,$o))",) NULL
util/module.o-cflags = -D'CONFIG_BLOCK_MODULES=$(block-modules)'
######################################################################
# Support building shared library libcacard
.PHONY: libcacard.la install-libcacard
ifeq ($(LIBTOOL),)
libcacard.la:
@echo "libtool is missing, please install and rerun configure"; exit 1
install-libcacard:
@echo "libtool is missing, please install and rerun configure"; exit 1
else
libcacard.la: $(oslib-obj-y) qemu-timer-common.o $(addsuffix .lo, $(basename $(trace-obj-y)))
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" libcacard.la,)
install-libcacard: libcacard.la
$(call quiet-command,$(MAKE) $(SUBDIR_MAKEFLAGS) -C libcacard V="$(V)" TARGET_DIR="$*/" install-libcacard,)
endif
######################################################################
qemu-img.o: qemu-img-cmds.h
qemu-img$(EXESUF): qemu-img.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(block-obj-y) libqemuutil.a libqemustub.a
qemu-io$(EXESUF): qemu-io.o $(block-obj-y) libqemuutil.a libqemustub.a
tools-obj-y = $(oslib-obj-y) $(trace-obj-y) qemu-tool.o qemu-timer.o \
main-loop.o iohandler.o error.o
tools-obj-$(CONFIG_POSIX) += compatfd.o
qemu-img$(EXESUF): qemu-img.o $(tools-obj-y) $(block-obj-y) libqemustub.a
qemu-nbd$(EXESUF): qemu-nbd.o $(tools-obj-y) $(block-obj-y) libqemustub.a
qemu-io$(EXESUF): qemu-io.o cmd.o $(tools-obj-y) $(block-obj-y) libqemustub.a
qemu-bridge-helper$(EXESUF): qemu-bridge-helper.o
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o libqemuutil.a libqemustub.a
vscclient$(EXESUF): $(libcacard-y) $(oslib-obj-y) $(trace-obj-y) libcacard/vscclient.o libqemustub.a
$(call quiet-command,$(CC) $(LDFLAGS) -o $@ $^ $(libcacard_libs) $(LIBS)," LINK $@")
fsdev/virtfs-proxy-helper$(EXESUF): fsdev/virtfs-proxy-helper.o fsdev/virtio-9p-marshal.o oslib-posix.o $(trace-obj-y)
fsdev/virtfs-proxy-helper$(EXESUF): LIBS += -lcap
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
@@ -228,63 +211,56 @@ qemu-ga$(EXESUF): QEMU_CFLAGS += -I qga/qapi-generated
gen-out-type = $(subst .,-,$(suffix $@))
ifneq ($(wildcard config-host.mak),)
include $(SRC_PATH)/tests/Makefile
endif
qapi-py = $(SRC_PATH)/scripts/qapi.py $(SRC_PATH)/scripts/ordereddict.py
qga/qapi-generated/qga-qapi-types.c qga/qapi-generated/qga-qapi-types.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qapi-visit.c qga/qapi-generated/qga-qapi-visit.h :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qga/qapi-generated/qga-qmp-commands.h qga/qapi-generated/qga-qmp-marshal.c :\
$(SRC_PATH)/qga/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o qga/qapi-generated -p "qga-" -i $<, \
" GEN $@")
$(SRC_PATH)/qapi-schema-guest.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -o qga/qapi-generated -p "qga-" < $<, " GEN $@")
qapi-types.c qapi-types.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-types.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-types.py $(gen-out-type) -o "." < $<, " GEN $@")
qapi-visit.c qapi-visit.h :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-visit.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py \
$(gen-out-type) -o "." -b -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-visit.py $(gen-out-type) -o "." < $<, " GEN $@")
qmp-commands.h qmp-marshal.c :\
$(SRC_PATH)/qapi-schema.json $(SRC_PATH)/scripts/qapi-commands.py $(qapi-py)
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py \
$(gen-out-type) -o "." -m -i $<, \
" GEN $@")
$(call quiet-command,$(PYTHON) $(SRC_PATH)/scripts/qapi-commands.py $(gen-out-type) -m -o "." < $<, " GEN $@")
QGALIB_GEN=$(addprefix qga/qapi-generated/, qga-qapi-types.h qga-qapi-visit.h qga-qmp-commands.h)
$(qga-obj-y) qemu-ga.o: $(QGALIB_GEN)
qemu-ga$(EXESUF): $(qga-obj-y) libqemuutil.a libqemustub.a
$(call LINK, $^)
qemu-ga$(EXESUF): qemu-ga.o $(qga-obj-y) $(oslib-obj-y) $(trace-obj-y) $(qapi-obj-y) $(qobject-obj-y) $(version-obj-y) libqemustub.a
QEMULIBS=libuser libdis libdis-user
clean:
# avoid old build problems by removing potentially incorrect old files
rm -f config.mak op-i386.h opc-i386.h gen-op-i386.h op-arm.h opc-arm.h gen-op-arm.h
rm -f qemu-options.def
find . \( -name '*.l[oa]' -o -name '*.so' -o -name '*.dll' -o -name '*.mo' -o -name '*.[oda]' \) -type f -exec rm {} +
rm -f $(filter-out %.tlb,$(TOOLS)) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -f fsdev/*.pod
rm -rf .libs */.libs
find . -name '*.[od]' -exec rm -f {} +
rm -f *.a *.lo $(TOOLS) $(HELPERS-y) qemu-ga TAGS cscope.* *.pod *~ */*~
rm -Rf .libs
rm -f qemu-img-cmds.h
rm -f trace-dtrace.dtrace trace-dtrace.dtrace-timestamp
@# May not be present in GENERATED_HEADERS
rm -f trace/generated-tracers-dtrace.dtrace*
rm -f trace/generated-tracers-dtrace.h*
rm -f trace-dtrace.h trace-dtrace.h-timestamp
rm -f $(foreach f,$(GENERATED_HEADERS),$(f) $(f)-timestamp)
rm -f $(foreach f,$(GENERATED_SOURCES),$(f) $(f)-timestamp)
rm -rf qapi-generated
rm -rf qga/qapi-generated
for d in $(ALL_SUBDIRS); do \
$(MAKE) -C tests/tcg clean
for d in $(ALL_SUBDIRS) $(QEMULIBS) libcacard; do \
if test -d $$d; then $(MAKE) -C $$d $@ || exit 1; fi; \
rm -f $$d/qemu-options.def; \
done
@@ -298,8 +274,7 @@ qemu-%.tar.bz2:
distclean: clean
rm -f config-host.mak config-host.h* config-host.ld $(DOCS) qemu-options.texi qemu-img-cmds.texi qemu-monitor.texi
rm -f config-all-devices.mak config-all-disas.mak
rm -f po/*.mo
rm -f config-all-devices.mak
rm -f roms/seabios/config.mak roms/vgabios/config.mak
rm -f qemu-doc.info qemu-doc.aux qemu-doc.cp qemu-doc.cps qemu-doc.dvi
rm -f qemu-doc.fn qemu-doc.fns qemu-doc.info qemu-doc.ky qemu-doc.kys
@@ -308,32 +283,26 @@ distclean: clean
rm -f config.log
rm -f linux-headers/asm
rm -f qemu-tech.info qemu-tech.aux qemu-tech.cp qemu-tech.dvi qemu-tech.fn qemu-tech.info qemu-tech.ky qemu-tech.log qemu-tech.pdf qemu-tech.pg qemu-tech.toc qemu-tech.tp qemu-tech.vr
for d in $(TARGET_DIRS); do \
for d in $(TARGET_DIRS) $(QEMULIBS); do \
rm -rf $$d || exit 1 ; \
done
rm -Rf .sdk
if test -f pixman/config.log; then make -C pixman distclean; fi
if test -f dtc/version_gen.h; then make $(DTC_MAKE_ARGS) clean; fi
KEYMAPS=da en-gb et fr fr-ch is lt modifiers no pt-br sv \
ar de en-us fi fr-be hr it lv nl pl ru th \
common de-ch es fo fr-ca hu ja mk nl-be pt sl tr \
bepo cz
bepo
ifdef INSTALL_BLOBS
BLOBS=bios.bin bios-256k.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
BLOBS=bios.bin sgabios.bin vgabios.bin vgabios-cirrus.bin \
vgabios-stdvga.bin vgabios-vmware.bin vgabios-qxl.bin \
acpi-dsdt.aml q35-acpi-dsdt.aml \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc QEMU,tcx.bin QEMU,cgthree.bin \
ppc_rom.bin openbios-sparc32 openbios-sparc64 openbios-ppc \
pxe-e1000.rom pxe-eepro100.rom pxe-ne2k_pci.rom \
pxe-pcnet.rom pxe-rtl8139.rom pxe-virtio.rom \
efi-e1000.rom efi-eepro100.rom efi-ne2k_pci.rom \
efi-pcnet.rom efi-rtl8139.rom efi-virtio.rom \
qemu-icon.bmp qemu_logo_no_text.svg \
qemu-icon.bmp \
bamboo.dtb petalogix-s3adsp1800.dtb petalogix-ml605.dtb \
multiboot.bin linuxboot.bin kvmvapic.bin \
s390-zipl.rom \
s390-ccw.img \
spapr-rtas.bin slof.bin \
palcode-clipper
else
@@ -343,16 +312,13 @@ endif
install-doc: $(DOCS)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qemu-doc.html qemu-tech.html "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
$(INSTALL_DATA) QMP/qmp-commands.txt "$(DESTDIR)$(qemu_docdir)"
ifdef CONFIG_POSIX
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 "$(DESTDIR)$(mandir)/man1"
ifneq ($(TOOLS),)
$(INSTALL_DATA) qemu-img.1 "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) qemu.1 qemu-img.1 "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man8"
$(INSTALL_DATA) qemu-nbd.8 "$(DESTDIR)$(mandir)/man8"
endif
endif
ifdef CONFIG_VIRTFS
$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
$(INSTALL_DATA) fsdev/virtfs-proxy-helper.1 "$(DESTDIR)$(mandir)/man1"
@@ -361,57 +327,32 @@ endif
install-datadir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)"
install-localstatedir:
ifdef CONFIG_POSIX
ifneq (,$(findstring qemu-ga,$(TOOLS)))
$(INSTALL_DIR) "$(DESTDIR)$(qemu_localstatedir)"/run
endif
endif
install-confdir:
$(INSTALL_DIR) "$(DESTDIR)$(qemu_confdir)"
install-sysconfig: install-datadir install-confdir
$(INSTALL_DATA) $(SRC_PATH)/sysconfigs/target/target-x86_64.conf "$(DESTDIR)$(qemu_confdir)"
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig \
install-datadir install-localstatedir
install: all $(if $(BUILD_DOCS),install-doc) install-sysconfig install-datadir
$(INSTALL_DIR) "$(DESTDIR)$(bindir)"
ifneq ($(TOOLS),)
$(INSTALL_PROG) $(TOOLS) "$(DESTDIR)$(bindir)"
ifneq ($(STRIP),)
$(STRIP) $(TOOLS:%="$(DESTDIR)$(bindir)/%")
endif
endif
ifneq ($(CONFIG_MODULES),)
$(INSTALL_DIR) "$(DESTDIR)$(qemu_moddir)"
for s in $(modules-m:.mo=$(DSOSUF)); do \
t="$(DESTDIR)$(qemu_moddir)/$$(echo $$s | tr / -)"; \
$(INSTALL_LIB) $$s "$$t"; \
test -z "$(STRIP)" || $(STRIP) "$$t"; \
done
$(INSTALL_PROG) $(STRIP_OPT) $(TOOLS) "$(DESTDIR)$(bindir)"
endif
ifneq ($(HELPERS-y),)
$(INSTALL_DIR) "$(DESTDIR)$(libexecdir)"
$(INSTALL_PROG) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
ifneq ($(STRIP),)
$(STRIP) $(HELPERS-y:%="$(DESTDIR)$(libexecdir)/%")
endif
$(INSTALL_PROG) $(STRIP_OPT) $(HELPERS-y) "$(DESTDIR)$(libexecdir)"
endif
ifneq ($(BLOBS),)
set -e; for x in $(BLOBS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/$$x "$(DESTDIR)$(qemu_datadir)"; \
done
endif
ifeq ($(CONFIG_GTK),y)
$(MAKE) -C po $@
endif
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/keymaps"
set -e; for x in $(KEYMAPS); do \
$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
done
for d in $(TARGET_DIRS); do \
$(MAKE) $(SUBDIR_MAKEFLAGS) TARGET_DIR=$$d/ -C $$d $@ || exit 1 ; \
$(MAKE) -C $$d $@ || exit 1 ; \
done
# various test targets
@@ -420,8 +361,7 @@ test speed: all
.PHONY: TAGS
TAGS:
rm -f $@
find "$(SRC_PATH)" -name '*.[hc]' -exec etags --append {} +
find "$(SRC_PATH)" -name '*.[hc]' -print0 | xargs -0 etags
cscope:
rm -f ./cscope.*
@@ -451,7 +391,7 @@ qemu-options.texi: $(SRC_PATH)/qemu-options.hx
qemu-monitor.texi: $(SRC_PATH)/hmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -t < $< > $@," GEN $@")
qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
QMP/qmp-commands.txt: $(SRC_PATH)/qmp-commands.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -q < $< > $@," GEN $@")
qemu-img-cmds.texi: $(SRC_PATH)/qemu-img-cmds.hx
@@ -490,61 +430,6 @@ qemu-doc.dvi qemu-doc.html qemu-doc.info qemu-doc.pdf: \
qemu-img.texi qemu-nbd.texi qemu-options.texi \
qemu-monitor.texi qemu-img-cmds.texi
ifdef CONFIG_WIN32
INSTALLER = qemu-setup-$(VERSION)$(EXESUF)
nsisflags = -V2 -NOCD
ifneq ($(wildcard $(SRC_PATH)/dll),)
ifeq ($(ARCH),x86_64)
# 64 bit executables
DLL_PATH = $(SRC_PATH)/dll/w64
nsisflags += -DW64
else
# 32 bit executables
DLL_PATH = $(SRC_PATH)/dll/w32
endif
endif
.PHONY: installer
installer: $(INSTALLER)
INSTDIR=/tmp/qemu-nsis
$(INSTALLER): $(SRC_PATH)/qemu.nsi
make install prefix=${INSTDIR}
ifdef SIGNCODE
(cd ${INSTDIR}; \
for i in *.exe; do \
$(SIGNCODE) $${i}; \
done \
)
endif # SIGNCODE
(cd ${INSTDIR}; \
for i in qemu-system-*.exe; do \
arch=$${i%.exe}; \
arch=$${arch#qemu-system-}; \
echo Section \"$$arch\" Section_$$arch; \
echo SetOutPath \"\$$INSTDIR\"; \
echo File \"\$${BINDIR}\\$$i\"; \
echo SectionEnd; \
done \
) >${INSTDIR}/system-emulations.nsh
makensis $(nsisflags) \
$(if $(BUILD_DOCS),-DCONFIG_DOCUMENTATION="y") \
$(if $(CONFIG_GTK),-DCONFIG_GTK="y") \
-DBINDIR="${INSTDIR}" \
$(if $(DLL_PATH),-DDLLDIR="$(DLL_PATH)") \
-DSRCDIR="$(SRC_PATH)" \
-DOUTFILE="$(INSTALLER)" \
$(SRC_PATH)/qemu.nsi
rm -r ${INSTDIR}
ifdef SIGNCODE
$(SIGNCODE) $(INSTALLER)
endif # SIGNCODE
endif # CONFIG_WIN
# Add a dependency on the generated files, so that they are always
# rebuilt before other object files
ifneq ($(filter-out %clean,$(MAKECMDGOALS)),$(if $(MAKECMDGOALS),,fail))

Makefile.dis (new file)

@@ -0,0 +1,20 @@
# Makefile for disassemblers.
include ../config-host.mak
include config.mak
include $(SRC_PATH)/rules.mak
.PHONY: all
$(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS+=-I..
include $(SRC_PATH)/Makefile.objs
all: $(libdis-y)
# Dummy command so that make thinks it has done something
@true
clean:
rm -f *.o *.d *.a *~


@@ -1,26 +1,212 @@
#######################################################################
# Common libraries for tools and emulators
# Stub library, linked in tools
stub-obj-y = stubs/
util-obj-y = util/ qobject/ qapi/ trace/
#######################################################################
# Target-independent parts used in system and user emulation
universal-obj-y =
universal-obj-y += qemu-log.o
#######################################################################
# QObject
qobject-obj-y = qint.o qstring.o qdict.o qlist.o qfloat.o qbool.o
qobject-obj-y += qjson.o json-lexer.o json-streamer.o json-parser.o
qobject-obj-y += qerror.o error.o qemu-error.o
universal-obj-y += $(qobject-obj-y)
#######################################################################
# QOM
qom-obj-y = qom/
universal-obj-y += $(qom-obj-y)
#######################################################################
# oslib-obj-y is code depending on the OS (win32 vs posix)
oslib-obj-y = osdep.o cutils.o qemu-timer-common.o
oslib-obj-$(CONFIG_WIN32) += oslib-win32.o qemu-thread-win32.o
oslib-obj-$(CONFIG_POSIX) += oslib-posix.o qemu-thread-posix.o
#######################################################################
# coroutines
coroutine-obj-y = qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
coroutine-obj-y += qemu-coroutine-sleep.o
ifeq ($(CONFIG_UCONTEXT_COROUTINE),y)
coroutine-obj-$(CONFIG_POSIX) += coroutine-ucontext.o
else
ifeq ($(CONFIG_SIGALTSTACK_COROUTINE),y)
coroutine-obj-$(CONFIG_POSIX) += coroutine-sigaltstack.o
else
coroutine-obj-$(CONFIG_POSIX) += coroutine-gthread.o
endif
endif
coroutine-obj-$(CONFIG_WIN32) += coroutine-win32.o
#######################################################################
# block-obj-y is code used by both qemu system emulation and qemu-img
block-obj-y = async.o thread-pool.o
block-obj-y += nbd.o block.o blockjob.o
block-obj-y += main-loop.o iohandler.o qemu-timer.o
block-obj-$(CONFIG_POSIX) += aio-posix.o
block-obj-$(CONFIG_WIN32) += aio-win32.o
block-obj-y = iov.o cache-utils.o qemu-option.o module.o async.o
block-obj-y += nbd.o block.o blockjob.o aes.o qemu-config.o
block-obj-y += thread-pool.o qemu-progress.o qemu-sockets.o uri.o notify.o
block-obj-y += $(coroutine-obj-y) $(qobject-obj-y) $(version-obj-y)
block-obj-$(CONFIG_POSIX) += event_notifier-posix.o aio-posix.o
block-obj-$(CONFIG_WIN32) += event_notifier-win32.o aio-win32.o
block-obj-y += block/
block-obj-y += qapi-types.o qapi-visit.o
block-obj-y += qemu-io-cmds.o
block-obj-y += $(qapi-obj-y) qapi-types.o qapi-visit.o
block-obj-y += qemu-coroutine.o qemu-coroutine-lock.o qemu-coroutine-io.o
block-obj-y += qemu-coroutine-sleep.o
block-obj-y += coroutine-$(CONFIG_COROUTINE_BACKEND).o
ifeq ($(CONFIG_VIRTIO)$(CONFIG_VIRTFS)$(CONFIG_PCI),yyy)
# Lots of the fsdev/9pcode is pulled in by vl.c via qemu_fsdev_add.
# only pull in the actual virtio-9p device if we also enabled virtio.
CONFIG_REALLY_VIRTFS=y
endif
block-obj-m = block/
######################################################################
# Target independent part of system emulation. The long term path is to
# suppress *all* target specific code in case of system emulation, i.e. a
# single QEMU executable should support all CPUs and machines.
common-obj-y = $(block-obj-y) blockdev.o blockdev-nbd.o block/
common-obj-y += net.o net/
common-obj-y += qom/
common-obj-y += readline.o console.o cursor.o
common-obj-y += qemu-pixman.o
common-obj-y += $(oslib-obj-y)
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
extra-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += tcg-runtime.o host-utils.o main-loop.o
common-obj-y += input.o
common-obj-y += buffered_file.o migration.o migration-tcp.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += block-migration.o iohandler.o
common-obj-y += bitmap.o bitops.o
common-obj-y += page_cache.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_WIN32) += version.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
common-obj-y += audio/
common-obj-y += hw/
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
common-obj-y += dma-helpers.o
common-obj-y += acl.o
common-obj-$(CONFIG_POSIX) += compatfd.o
common-obj-y += qemu-timer.o qemu-timer-common.o
common-obj-y += qtest.o
common-obj-y += vl.o
common-obj-$(CONFIG_SLIRP) += slirp/
common-obj-y += backends/
######################################################################
# libseccomp
ifeq ($(CONFIG_SECCOMP),y)
common-obj-y += qemu-seccomp.o
endif
######################################################################
# libuser
user-obj-y =
user-obj-y += envlist.o path.o
user-obj-y += tcg-runtime.o host-utils.o
user-obj-y += cache-utils.o
user-obj-y += module.o
user-obj-y += qemu-user.o
user-obj-y += $(trace-obj-y)
user-obj-y += qom/
######################################################################
# libdis
# NOTE: the disassembler code is only needed for debugging
libdis-y =
libdis-$(CONFIG_ALPHA_DIS) += alpha-dis.o
libdis-$(CONFIG_ARM_DIS) += arm-dis.o
libdis-$(CONFIG_CRIS_DIS) += cris-dis.o
libdis-$(CONFIG_HPPA_DIS) += hppa-dis.o
libdis-$(CONFIG_I386_DIS) += i386-dis.o
libdis-$(CONFIG_IA64_DIS) += ia64-dis.o
libdis-$(CONFIG_M68K_DIS) += m68k-dis.o
libdis-$(CONFIG_MICROBLAZE_DIS) += microblaze-dis.o
libdis-$(CONFIG_MIPS_DIS) += mips-dis.o
libdis-$(CONFIG_PPC_DIS) += ppc-dis.o
libdis-$(CONFIG_S390_DIS) += s390-dis.o
libdis-$(CONFIG_SH4_DIS) += sh4-dis.o
libdis-$(CONFIG_SPARC_DIS) += sparc-dis.o
libdis-$(CONFIG_LM32_DIS) += lm32-dis.o
######################################################################
# trace
ifeq ($(TRACE_BACKEND),dtrace)
TRACE_H_EXTRA_DEPS=trace-dtrace.h
endif
trace.h: trace.h-timestamp $(TRACE_H_EXTRA_DEPS)
trace.h-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=h \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace.h")
@cmp -s $@ trace.h || cp $@ trace.h
trace.c: trace.c-timestamp
trace.c-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=c \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace.c")
@cmp -s $@ trace.c || cp $@ trace.c
trace.o: trace.c $(GENERATED_HEADERS)
trace-dtrace.h: trace-dtrace.dtrace
$(call quiet-command,dtrace -o $@ -h -s $<, " GEN trace-dtrace.h")
# Normal practice is to name DTrace probe file with a '.d' extension
# but that gets picked up by QEMU's Makefile as an external dependency
# rule file. So we use '.dtrace' instead
trace-dtrace.dtrace: trace-dtrace.dtrace-timestamp
trace-dtrace.dtrace-timestamp: $(SRC_PATH)/trace-events $(BUILD_DIR)/config-host.mak
$(call quiet-command,$(TRACETOOL) \
--format=d \
--backend=$(TRACE_BACKEND) \
< $< > $@," GEN trace-dtrace.dtrace")
@cmp -s $@ trace-dtrace.dtrace || cp $@ trace-dtrace.dtrace
trace-dtrace.o: trace-dtrace.dtrace $(GENERATED_HEADERS)
$(call quiet-command,dtrace -o $@ -G -s $<, " GEN trace-dtrace.o")
ifeq ($(LIBTOOL),)
trace-dtrace.lo: trace-dtrace.dtrace
@echo "missing libtool. please install and rerun configure."; exit 1
else
trace-dtrace.lo: trace-dtrace.dtrace
$(call quiet-command,$(LIBTOOL) --mode=compile --tag=CC dtrace -o $@ -G -s $<, " lt GEN trace-dtrace.o")
endif
trace/simple.o: trace/simple.c $(GENERATED_HEADERS)
trace-obj-$(CONFIG_TRACE_DTRACE) += trace-dtrace.o
ifneq ($(TRACE_BACKEND),dtrace)
trace-obj-y = trace.o
endif
trace-obj-$(CONFIG_TRACE_DEFAULT) += trace/default.o
trace-obj-$(CONFIG_TRACE_SIMPLE) += trace/simple.o
trace-obj-$(CONFIG_TRACE_SIMPLE) += qemu-timer-common.o
trace-obj-$(CONFIG_TRACE_STDERR) += trace/stderr.o
trace-obj-y += trace/control.o
$(trace-obj-y): $(GENERATED_HEADERS)
######################################################################
# smartcard
@@ -30,86 +216,39 @@ libcacard-y += libcacard/vcard.o libcacard/vreader.o
libcacard-y += libcacard/vcard_emul_nss.o
libcacard-y += libcacard/vcard_emul_type.o
libcacard-y += libcacard/card_7816.o
libcacard-y += libcacard/vcardt.o
libcacard/vcard_emul_nss.o-cflags := $(NSS_CFLAGS)
libcacard/vcard_emul_nss.o-libs := $(NSS_LIBS)
######################################################################
# Target independent part of system emulation. The long term path is to
# suppress *all* target specific code in case of system emulation, i.e. a
# single QEMU executable should support all CPUs and machines.
ifeq ($(CONFIG_SOFTMMU),y)
common-obj-y = blockdev.o blockdev-nbd.o block/
common-obj-y += iothread.o
common-obj-y += net/
common-obj-y += qdev-monitor.o device-hotplug.o
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
common-obj-$(CONFIG_LINUX) += fsdev/
common-obj-y += migration.o migration-tcp.o
common-obj-y += vmstate.o
common-obj-y += qemu-file.o
common-obj-$(CONFIG_RDMA) += migration-rdma.o
common-obj-y += qemu-char.o #aio.o
common-obj-y += block-migration.o
common-obj-y += page_cache.o xbzrle.o
common-obj-$(CONFIG_POSIX) += migration-exec.o migration-unix.o migration-fd.o
common-obj-$(CONFIG_SPICE) += spice-qemu-char.o
common-obj-y += audio/
common-obj-y += hw/
common-obj-y += ui/
common-obj-y += bt-host.o bt-vhci.o
bt-host.o-cflags := $(BLUEZ_CFLAGS)
common-obj-y += dma-helpers.o
common-obj-y += vl.o
vl.o-cflags := $(GPROF_CFLAGS) $(SDL_CFLAGS)
common-obj-y += tpm.o
common-obj-$(CONFIG_SLIRP) += slirp/
common-obj-y += backends/
common-obj-$(CONFIG_SECCOMP) += qemu-seccomp.o
common-obj-$(CONFIG_SMARTCARD_NSS) += $(libcacard-y)
######################################################################
# qapi
common-obj-y += qmp-marshal.o
qapi-obj-y = qapi/
qapi-obj-y += qapi-types.o qapi-visit.o
common-obj-y += qmp-marshal.o qapi-visit.o qapi-types.o
common-obj-y += qmp.o hmp.o
endif
######################################################################
# some qapi visitors are used by both system and user emulation:
common-obj-y += qapi-visit.o qapi-types.o
#######################################################################
# Target-independent parts used in system and user emulation
common-obj-y += qemu-log.o
common-obj-y += tcg-runtime.o
common-obj-y += hw/
common-obj-y += qom/
common-obj-y += disas/
######################################################################
# Resource file for Windows executables
version-obj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.o
version-lobj-$(CONFIG_WIN32) += $(BUILD_DIR)/version.lo
universal-obj-y += $(qapi-obj-y)
######################################################################
# guest agent
# FIXME: a few definitions from qapi-types.o/qapi-visit.o are needed
# by libqemuutil.a. These should be moved to a separate .json schema.
qga-obj-y = qga/ qapi-types.o qapi-visit.o
qga-vss-dll-obj-y = qga/
qga-obj-y = qga/ qemu-ga.o module.o qemu-tool.o
qga-obj-$(CONFIG_POSIX) += qemu-sockets.o qemu-option.o
vl.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
vl.o: QEMU_CFLAGS+=$(SDL_CFLAGS)
QEMU_CFLAGS+=$(GLIB_CFLAGS)
nested-vars += \
stub-obj-y \
qga-obj-y \
qom-obj-y \
qapi-obj-y \
block-obj-y \
user-obj-y \
common-obj-y \
extra-obj-y
dummy := $(call unnest-vars)


@@ -1,8 +1,8 @@
# -*- Mode: makefile -*-
include ../config-host.mak
include config-target.mak
include config-devices.mak
include config-target.mak
include $(SRC_PATH)/rules.mak
$(call set-vpath, $(SRC_PATH))
@@ -15,30 +15,35 @@ QEMU_CFLAGS+=-I$(SRC_PATH)/include
ifdef CONFIG_USER_ONLY
# user emulator name
QEMU_PROG=qemu-$(TARGET_NAME)
QEMU_PROG_BUILD = $(QEMU_PROG)
QEMU_PROG=qemu-$(TARGET_ARCH2)
else
# system emulator name
QEMU_PROG=qemu-system-$(TARGET_NAME)$(EXESUF)
ifneq (,$(findstring -mwindows,$(libs_softmmu)))
ifneq (,$(findstring -mwindows,$(LIBS)))
# Terminate program name with a 'w' because the linker builds a windows executable.
QEMU_PROGW=qemu-system-$(TARGET_NAME)w$(EXESUF)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
QEMU_PROG_BUILD = $(QEMU_PROGW)
else
QEMU_PROG_BUILD = $(QEMU_PROG)
endif
QEMU_PROGW=qemu-system-$(TARGET_ARCH2)w$(EXESUF)
endif # windows executable
QEMU_PROG=qemu-system-$(TARGET_ARCH2)$(EXESUF)
endif
PROGS=$(QEMU_PROG) $(QEMU_PROGW)
PROGS=$(QEMU_PROG)
ifdef QEMU_PROGW
PROGS+=$(QEMU_PROGW)
endif
STPFILES=
ifdef CONFIG_LINUX_USER
PROGS+=$(QEMU_PROG)-binfmt
endif
ifndef CONFIG_HAIKU
LIBS+=-lm
endif
config-target.h: config-target.h-timestamp
config-target.h-timestamp: config-target.mak
ifdef CONFIG_TRACE_SYSTEMTAP
stap: $(QEMU_PROG).stp-installed $(QEMU_PROG).stp
stap: $(QEMU_PROG).stp
ifdef CONFIG_USER_ONLY
TARGET_TYPE=user
@@ -46,24 +51,14 @@ else
TARGET_TYPE=system
endif
$(QEMU_PROG).stp-installed: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backend=$(TRACE_BACKEND) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp-installed")
$(QEMU_PROG).stp: $(SRC_PATH)/trace-events
$(call quiet-command,$(TRACETOOL) \
--format=stap \
--backend=$(TRACE_BACKEND) \
--binary=$(realpath .)/$(QEMU_PROG) \
--target-name=$(TARGET_NAME) \
--binary=$(bindir)/$(QEMU_PROG) \
--target-arch=$(TARGET_ARCH) \
--target-type=$(TARGET_TYPE) \
< $< > $@," GEN $(TARGET_DIR)$(QEMU_PROG).stp")
< $< > $@," GEN $(QEMU_PROG).stp")
else
stap:
endif
@@ -78,12 +73,13 @@ all: $(PROGS) stap
obj-y = exec.o translate-all.o cpu-exec.o
obj-y += tcg/tcg.o tcg/optimize.o
obj-$(CONFIG_TCG_INTERPRETER) += tci.o
obj-$(CONFIG_TCG_INTERPRETER) += disas/tci.o
obj-y += fpu/softfloat.o
obj-y += target-$(TARGET_BASE_ARCH)/
obj-y += disas.o
obj-$(call notempty,$(TARGET_XML_FILES)) += gdbstub-xml.o
obj-$(call lnot,$(CONFIG_KVM)) += kvm-stub.o
obj-$(CONFIG_TCI_DIS) += tci-dis.o
obj-y += target-$(TARGET_BASE_ARCH)/
obj-$(CONFIG_GDBSTUB_XML) += gdbstub-xml.o
tci-dis.o: QEMU_CFLAGS += -I$(SRC_PATH)/tcg -I$(SRC_PATH)/tcg/tci
#########################################################
# Linux user emulator target
@@ -93,7 +89,9 @@ ifdef CONFIG_LINUX_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/linux-user/$(TARGET_ABI_DIR) -I$(SRC_PATH)/linux-user
obj-y += linux-user/
obj-y += gdbstub.o thunk.o user-exec.o
obj-y += gdbstub.o thunk.o user-exec.o $(oslib-obj-y)
obj-binfmt-y += linux-user/
endif #CONFIG_LINUX_USER
@@ -102,39 +100,51 @@ endif #CONFIG_LINUX_USER
ifdef CONFIG_BSD_USER
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ABI_DIR)
QEMU_CFLAGS+=-I$(SRC_PATH)/bsd-user -I$(SRC_PATH)/bsd-user/$(TARGET_ARCH)
obj-y += bsd-user/
obj-y += gdbstub.o user-exec.o
obj-y += gdbstub.o user-exec.o $(oslib-obj-y)
endif #CONFIG_BSD_USER
#########################################################
# System emulator target
ifdef CONFIG_SOFTMMU
CONFIG_NO_PCI = $(if $(subst n,,$(CONFIG_PCI)),n,y)
CONFIG_NO_KVM = $(if $(subst n,,$(CONFIG_KVM)),n,y)
CONFIG_NO_XEN = $(if $(subst n,,$(CONFIG_XEN)),n,y)
CONFIG_NO_GET_MEMORY_MAPPING = $(if $(subst n,,$(CONFIG_HAVE_GET_MEMORY_MAPPING)),n,y)
CONFIG_NO_CORE_DUMP = $(if $(subst n,,$(CONFIG_HAVE_CORE_DUMP)),n,y)
obj-y += arch_init.o cpus.o monitor.o gdbstub.o balloon.o ioport.o
obj-y += qtest.o
obj-y += hw/
obj-$(CONFIG_FDT) += device_tree.o
obj-$(CONFIG_KVM) += kvm-all.o
obj-$(CONFIG_NO_KVM) += kvm-stub.o
obj-y += memory.o savevm.o cputlb.o
obj-y += memory_mapping.o
obj-y += dump.o
LIBS+=$(libs_softmmu)
obj-$(CONFIG_HAVE_GET_MEMORY_MAPPING) += memory_mapping.o
obj-$(CONFIG_HAVE_CORE_DUMP) += dump.o
obj-$(CONFIG_NO_GET_MEMORY_MAPPING) += memory_mapping-stub.o
obj-$(CONFIG_NO_CORE_DUMP) += dump-stub.o
LIBS+=-lz
QEMU_CFLAGS += $(VNC_TLS_CFLAGS)
QEMU_CFLAGS += $(VNC_SASL_CFLAGS)
QEMU_CFLAGS += $(VNC_JPEG_CFLAGS)
QEMU_CFLAGS += $(VNC_PNG_CFLAGS)
# xen support
obj-$(CONFIG_XEN) += xen-common.o
obj-$(CONFIG_XEN_I386) += xen-hvm.o xen-mapcache.o
obj-$(call lnot,$(CONFIG_XEN)) += xen-common-stub.o
obj-$(call lnot,$(CONFIG_XEN_I386)) += xen-hvm-stub.o
obj-$(CONFIG_XEN) += xen-all.o xen-mapcache.o
obj-$(CONFIG_NO_XEN) += xen-stub.o
# Hardware support
ifeq ($(TARGET_NAME), sparc64)
ifeq ($(TARGET_ARCH), sparc64)
obj-y += hw/sparc64/
else
obj-y += hw/$(TARGET_BASE_ARCH)/
endif
main.o: QEMU_CFLAGS+=$(GPROF_CFLAGS)
GENERATED_HEADERS += hmp-commands.h qmp-commands-old.h
endif # CONFIG_SOFTMMU
@@ -142,26 +152,38 @@ endif # CONFIG_SOFTMMU
# Workaround for http://gcc.gnu.org/PR55489, see configure.
%/translate.o: QEMU_CFLAGS += $(TRANSLATE_OPT_CFLAGS)
dummy := $(call unnest-vars,,obj-y)
all-obj-y := $(obj-y)
block-obj-y :=
common-obj-y :=
include $(SRC_PATH)/Makefile.objs
dummy := $(call unnest-vars,.., \
block-obj-y \
block-obj-m \
common-obj-y \
common-obj-m)
all-obj-y += $(common-obj-y)
all-obj-$(CONFIG_SOFTMMU) += $(block-obj-y)
ifndef CONFIG_HAIKU
LIBS+=-lm
nested-vars += obj-y
ifdef CONFIG_LINUX_USER
nested-vars += obj-binfmt-y
endif
# build either PROG or PROGW
$(QEMU_PROG_BUILD): $(all-obj-y) ../libqemuutil.a ../libqemustub.a
# This resolves all nested paths, so it must come last
include $(SRC_PATH)/Makefile.objs
all-obj-y = $(obj-y)
all-obj-y += $(addprefix ../, $(universal-obj-y))
ifdef CONFIG_SOFTMMU
all-obj-y += $(addprefix ../, $(common-obj-y))
all-obj-y += $(addprefix ../libdis/, $(libdis-y))
all-obj-y += $(addprefix ../, $(trace-obj-y))
else
all-obj-y += $(addprefix ../libuser/, $(user-obj-y))
all-obj-y += $(addprefix ../libdis-user/, $(libdis-y))
endif #CONFIG_LINUX_USER
ifdef QEMU_PROGW
# The linker builds a windows executable. Make also a console executable.
$(QEMU_PROGW): $(all-obj-y) ../libqemustub.a
$(call LINK,$^)
$(QEMU_PROG): $(QEMU_PROGW)
$(call quiet-command,$(OBJCOPY) --subsystem console $(QEMU_PROGW) $(QEMU_PROG)," GEN $(TARGET_DIR)$(QEMU_PROG)")
else
$(QEMU_PROG): $(all-obj-y) ../libqemustub.a
$(call LINK,$^)
endif
$(QEMU_PROG)-binfmt: $(obj-binfmt-y)
$(call LINK,$^)
gdbstub-xml.c: $(TARGET_XML_FILES) $(SRC_PATH)/scripts/feature_to_c.sh
@@ -183,14 +205,14 @@ endif
install: all
ifneq ($(PROGS),)
$(INSTALL_PROG) $(PROGS) "$(DESTDIR)$(bindir)"
$(INSTALL) -m 755 $(PROGS) "$(DESTDIR)$(bindir)"
ifneq ($(STRIP),)
$(STRIP) $(PROGS:%="$(DESTDIR)$(bindir)/%")
$(STRIP) $(patsubst %,"$(DESTDIR)$(bindir)/%",$(PROGS))
endif
endif
ifdef CONFIG_TRACE_SYSTEMTAP
$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
$(INSTALL_DATA) $(QEMU_PROG).stp-installed "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset/$(QEMU_PROG).stp"
$(INSTALL_DATA) $(QEMU_PROG).stp "$(DESTDIR)$(qemu_datadir)/../systemtap/tapset"
endif
GENERATED_HEADERS += config-target.h

Makefile.user (new file)

@@ -0,0 +1,24 @@
# Makefile for qemu target independent user files.
include ../config-host.mak
include $(SRC_PATH)/rules.mak
-include config.mak
.PHONY: all
$(call set-vpath, $(SRC_PATH))
QEMU_CFLAGS+=-I..
QEMU_CFLAGS += -I$(SRC_PATH)/include
QEMU_CFLAGS += -DCONFIG_USER_ONLY
include $(SRC_PATH)/Makefile.objs
all: $(user-obj-y)
# Dummy command so that make thinks it has done something
@true
clean:
for d in . trace; do \
rm -f $$d/*.o $$d/*.d $$d/*.a $$d/*~; \
done

QMP/README (new file)

@@ -0,0 +1,88 @@
QEMU Monitor Protocol
=====================
Introduction
-------------
The QEMU Monitor Protocol (QMP) allows applications to communicate with
QEMU's Monitor.
QMP is JSON[1] based and currently has the following features:
- Lightweight, text-based, easy to parse data format
- Asynchronous message support (i.e. events)
- Capabilities Negotiation
For detailed information on QMP's usage, please refer to the following files:
o qmp-spec.txt QEMU Monitor Protocol current specification
o qmp-commands.txt QMP supported commands (auto-generated at build-time)
o qmp-events.txt List of available asynchronous events
There is also a simple Python script called 'qmp-shell' available.
IMPORTANT: It's strongly recommended to read the 'Stability Considerations'
section in the qmp-commands.txt file before making any serious use of QMP.
[1] http://www.json.org
Usage
-----
To enable QMP, you need a QEMU monitor instance in "control mode". There are
two ways of doing this.
The simplest one is using the '-qmp' command-line option. The following
example makes QMP available on localhost port 4444:
$ qemu [...] -qmp tcp:localhost:4444,server
However, in order to have more complex combinations, like multiple monitors,
the '-mon' command-line option should be used along with the '-chardev' one.
For instance, the following example creates one user monitor on stdio and one
QMP monitor on localhost port 4444.
$ qemu [...] -chardev stdio,id=mon0 -mon chardev=mon0,mode=readline \
-chardev socket,id=mon1,host=localhost,port=4444,server \
-mon chardev=mon1,mode=control
Please refer to QEMU's manpage for more information.
Simple Testing
--------------
To manually test QMP one can connect with telnet and issue commands by hand:
$ telnet localhost 4444
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 13, "major": 0}, "package": ""}, "capabilities": []}}
{ "execute": "qmp_capabilities" }
{"return": {}}
{ "execute": "query-version" }
{"return": {"qemu": {"micro": 50, "minor": 13, "major": 0}, "package": ""}}
Development Process
-------------------
When changing QMP's interface (by adding new commands, events or modifying
existing ones) it's mandatory to update the relevant documentation, which is
one (or more) of the files listed in the 'Introduction' section*.
Also, it's strongly recommended to send the documentation patch first, before
making any code change, because:
1. It avoids the code dictating the interface
2. Review can improve your interface, and letting that happen before
you implement it can save you work.
* The qmp-commands.txt file is generated from the qmp-commands.hx one, which
is the file that should be edited.
Homepage
--------
http://wiki.qemu.org/QMP


@@ -33,7 +33,7 @@
# $ qemu-ga-client fsfreeze freeze
# 2 filesystems frozen
#
# See also: http://wiki.qemu-project.org/Features/QAPI/GuestAgent
# See also: http://wiki.qemu.org/Features/QAPI/GuestAgent
#
import base64
@@ -267,9 +267,7 @@ def main(address, cmd, args):
print('Hint: qemu is not running?')
sys.exit(1)
if cmd == 'fsfreeze' and args[0] == 'freeze':
client.sync(60)
elif cmd != 'ping':
if cmd != 'ping':
client.sync()
globals()['_cmd_' + cmd](client, args)


@@ -1,4 +1,4 @@
QEMU Machine Protocol Events
QEMU Monitor Protocol Events
============================
BALLOON_CHANGE
@@ -18,28 +18,6 @@ Example:
"data": { "actual": 944766976 },
"timestamp": { "seconds": 1267020223, "microseconds": 435656 } }
BLOCK_IMAGE_CORRUPTED
---------------------
Emitted when a disk image is being marked corrupt.
Data:
- "device": Device name (json-string)
- "msg": Informative message (e.g., reason for the corruption) (json-string)
- "offset": If the corruption resulted from an image access, this is the access
offset into the image (json-int)
- "size": If the corruption resulted from an image access, this is the access
size (json-int)
Example:
{ "event": "BLOCK_IMAGE_CORRUPTED",
"data": { "device": "ide0-hd0",
"msg": "Prevented active L1 table overwrite", "offset": 196608,
"size": 65536 },
"timestamp": { "seconds": 1378126126, "microseconds": 966463 } }
BLOCK_IO_ERROR
--------------
@@ -158,24 +136,6 @@ Example:
Note: The "ready to complete" status is always reset by a BLOCK_JOB_ERROR
event.
DEVICE_DELETED
--------------
Emitted whenever the device removal completion is acknowledged
by the guest.
At this point, it's safe to reuse the specified device ID.
Device removal can be initiated by the guest or by HMP/QMP commands.
Data:
- "device": device name (json-string, optional)
- "path": device path (json-string)
{ "event": "DEVICE_DELETED",
"data": { "device": "virtio-net-pci-0",
"path": "/machine/peripheral/virtio-net-pci-0" },
"timestamp": { "seconds": 1265044230, "microseconds": 450486 } }
DEVICE_TRAY_MOVED
-----------------
@@ -194,76 +154,6 @@ Data:
},
"timestamp": { "seconds": 1265044230, "microseconds": 450486 } }
GUEST_PANICKED
--------------
Emitted when guest OS panic is detected.
Data:
- "action": Action that has been taken (json-string, currently always "pause").
Example:
{ "event": "GUEST_PANICKED",
"data": { "action": "pause" } }
NIC_RX_FILTER_CHANGED
---------------------
The event is emitted only once until the query command is executed;
the first event will always be emitted.
Data:
- "name": net client name (json-string)
- "path": device path (json-string)
{ "event": "NIC_RX_FILTER_CHANGED",
"data": { "name": "vnet0",
"path": "/machine/peripheral/vnet0/virtio-backend" },
"timestamp": { "seconds": 1368697518, "microseconds": 326866 } }
}
QUORUM_FAILURE
--------------
Emitted by the Quorum block driver if it fails to establish a quorum.
Data:
- "reference": device name if defined else node name.
- "sector-num": Number of the first sector of the failed read operation.
- "sector-count": Failed read operation sector count.
Example:
{ "event": "QUORUM_FAILURE",
"data": { "reference": "usr1", "sector-num": 345435, "sector-count": 5 },
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
QUORUM_REPORT_BAD
-----------------
Emitted to report a corruption of a Quorum file.
Data:
- "error": Error message (json-string, optional)
Only present on failure. This field contains a human-readable
error message. There are no semantics other than that the
block layer reported an error and clients should not try to
interpret the error string.
- "node-name": The graph node name of the block driver state.
- "sector-num": Number of the first sector of the failed read operation.
- "sector-count": Failed read operation sector count.
Example:
{ "event": "QUORUM_REPORT_BAD",
"data": { "node-name": "1.raw", "sector-num": 345435, "sector-count": 5 },
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
RESET
-----
@@ -295,8 +185,7 @@ Emitted when the guest changes the RTC time.
Data:
- "offset": Offset between base RTC clock (as specified by -rtc base), and
new RTC clock value (json-number)
- "offset": delta against the host UTC in seconds (json-number)
Example:
@@ -518,7 +407,7 @@ Data: None.
Example:
{ "event": "WAKEUP",
{ "event": "WATCHDOG",
"timestamp": { "seconds": 1344522075, "microseconds": 745528 } }
WATCHDOG


@@ -31,7 +31,6 @@
# (QEMU)
import qmp
import json
import readline
import sys
import pprint
@@ -92,7 +91,7 @@ class QMPShell(qmp.QEMUMonitorProtocol):
"""
Build a QMP input object from a user provided command-line in the
following format:
< command-name > [ arg-name1=arg1 ] ... [ arg-nameN=argN ]
"""
cmdargs = cmdline.split()
@@ -100,41 +99,16 @@ class QMPShell(qmp.QEMUMonitorProtocol):
for arg in cmdargs[1:]:
opt = arg.split('=')
try:
if(len(opt) > 2):
opt[1] = '='.join(opt[1:])
value = int(opt[1])
except ValueError:
if opt[1] == 'true':
value = True
elif opt[1] == 'false':
value = False
elif opt[1].startswith('{'):
value = json.loads(opt[1])
else:
value = opt[1]
optpath = opt[0].split('.')
parent = qmpcmd['arguments']
curpath = []
for p in optpath[:-1]:
curpath.append(p)
d = parent.get(p, {})
if type(d) is not dict:
raise QMPShellError('Cannot use "%s" as both leaf and non-leaf key' % '.'.join(curpath))
parent[p] = d
parent = d
if optpath[-1] in parent:
if type(parent[optpath[-1]]) is dict:
raise QMPShellError('Cannot use "%s" as both leaf and non-leaf key' % '.'.join(curpath))
else:
raise QMPShellError('Cannot set "%s" multiple times' % opt[0])
parent[optpath[-1]] = value
value = opt[1]
qmpcmd['arguments'][opt[0]] = value
return qmpcmd
def _execute_cmd(self, cmdline):
try:
qmpcmd = self.__build_cmd(cmdline)
except Exception, e:
print 'Error while parsing command line: %s' % e
except:
print 'command format: <command-name> ',
print '[arg-name1=arg1] ... [arg-nameN=argN]'
return True
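As an illustration, the qmp-shell command-line format documented in the
__build_cmd docstring above maps directly onto a QMP request object. The
command and arguments below are hypothetical example input, not taken from
this diff:

# qmp-shell input:
#   device_add driver=e1000 id=net1
# resulting QMP request object:
request = {
    'execute': 'device_add',
    'arguments': {'driver': 'e1000', 'id': 'net1'},
}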


@@ -1,17 +1,21 @@
QEMU Machine Protocol Specification
QEMU Monitor Protocol Specification - Version 0.1
1. Introduction
===============
This document specifies the QEMU Machine Protocol (QMP), a JSON-based protocol
which is available for applications to operate QEMU at the machine-level.
This document specifies the QEMU Monitor Protocol (QMP), a JSON-based protocol
which is available for applications to control QEMU at the machine-level.
To enable QMP support, QEMU has to be run in "control mode". This is done by
starting QEMU with the appropriate command-line options. Please, refer to the
QEMU manual page for more information.
2. Protocol Specification
=========================
This section details the protocol format. For the purpose of this document
"Client" is any application which is using QMP to communicate with QEMU and
"Server" is QEMU itself.
"Client" is any application which is communicating with QEMU in control mode,
and "Server" is QEMU itself.
JSON data structures, when mentioned in this document, are always in the
following format:
@@ -43,14 +47,14 @@ that the connection has been successfully established and that the Server is
ready for capabilities negotiation (for more information refer to section
'4. Capabilities Negotiation').
The greeting message format is:
The format is:
{ "QMP": { "version": json-object, "capabilities": json-array } }
Where,
- The "version" member contains the Server's version information (the format
is the same of the query-version command)
is the same of the 'query-version' command)
- The "capabilities" member specify the availability of features beyond the
baseline specification
@@ -79,7 +83,10 @@ of a command execution: success or error.
2.4.1 success
-------------
The format of a success response is:
The success response is issued when the command execution has finished
without errors.
The format is:
{ "return": json-object, "id": json-value }
@@ -89,12 +96,15 @@ The format of a success response is:
in a per-command basis or an empty json-object if the command does not
return data
- The "id" member contains the transaction identification associated
with the command execution if issued by the Client
with the command execution (if issued by the Client)
2.4.2 error
-----------
The format of an error response is:
The error response is issued when the command execution could not be
completed because of an error condition.
The format is:
{ "error": { "class": json-string, "desc": json-string }, "id": json-value }
@@ -104,7 +114,7 @@ The format of an error response is:
- The "desc" member is a human-readable error message. Clients should
not attempt to parse this message.
- The "id" member contains the transaction identification associated with
the command execution if issued by the Client
the command execution (if issued by the Client)
NOTE: Some errors can occur before the Server is able to read the "id" member,
in these cases the "id" member will not be part of the error response, even
@@ -114,9 +124,9 @@ if provided by the client.
-----------------------
As a result of state changes, the Server may send messages unilaterally
to the Client at any time. They are called "asynchronous events".
to the Client at any time. They are called 'asynchronous events'.
The format of asynchronous events is:
The format is:
{ "event": json-string, "data": json-object,
"timestamp": { "seconds": json-number, "microseconds": json-number } }
@@ -137,37 +147,36 @@ qmp-events.txt file.
===============
This section provides some examples of real QMP usage, in all of them
"C" stands for "Client" and "S" stands for "Server".
'C' stands for 'Client' and 'S' stands for 'Server'.
3.1 Server greeting
-------------------
S: { "QMP": { "version": { "qemu": { "micro": 50, "minor": 6, "major": 1 },
"package": ""}, "capabilities": []}}
S: {"QMP": {"version": {"qemu": "0.12.50", "package": ""}, "capabilities": []}}
3.2 Simple 'stop' execution
---------------------------
C: { "execute": "stop" }
S: { "return": {} }
S: {"return": {}}
3.3 KVM information
-------------------
C: { "execute": "query-kvm", "id": "example" }
S: { "return": { "enabled": true, "present": true }, "id": "example"}
S: {"return": {"enabled": true, "present": true}, "id": "example"}
3.4 Parsing error
------------------
C: { "execute": }
S: { "error": { "class": "GenericError", "desc": "Invalid JSON syntax" } }
S: {"error": {"class": "GenericError", "desc": "Invalid JSON syntax" } }
3.5 Powerdown event
-------------------
S: { "timestamp": { "seconds": 1258551470, "microseconds": 802384 },
"event": "POWERDOWN" }
S: {"timestamp": {"seconds": 1258551470, "microseconds": 802384}, "event":
"POWERDOWN"}
4. Capabilities Negotiation
----------------------------
@@ -175,17 +184,17 @@ S: { "timestamp": { "seconds": 1258551470, "microseconds": 802384 },
When a Client successfully establishes a connection, the Server is in
Capabilities Negotiation mode.
In this mode only the qmp_capabilities command is allowed to run, all
other commands will return the CommandNotFound error. Asynchronous
messages are not delivered either.
In this mode only the 'qmp_capabilities' command is allowed to run, all
other commands will return the CommandNotFound error. Asynchronous messages
are not delivered either.
Clients should use the qmp_capabilities command to enable capabilities
Clients should use the 'qmp_capabilities' command to enable capabilities
advertised in the Server's greeting (section '2.2 Server Greeting') they
support.
When the qmp_capabilities command is issued, and if it does not return an
When the 'qmp_capabilities' command is issued, and if it does not return an
error, the Server enters in Command mode where capabilities changes take
effect, all commands (except qmp_capabilities) are allowed and asynchronous
effect, all commands (except 'qmp_capabilities') are allowed and asynchronous
messages are delivered.
5 Compatibility Considerations
@@ -236,7 +245,7 @@ arguments, errors, asynchronous events, and so forth.
Any new names downstream wishes to add must begin with '__'. To
ensure compatibility with other downstreams, it is strongly
recommended that you prefix your downstream names with '__RFQDN_' where
recommended that you prefix your downstram names with '__RFQDN_' where
RFQDN is a valid, reverse fully qualified domain name which you
control. For example, a qemu-kvm specific monitor command would be:


@@ -1,5 +1,5 @@
# QEMU Monitor Protocol Python class
#
#
# Copyright (C) 2009, 2010 Red Hat Inc.
#
# Authors:
@@ -171,12 +171,7 @@ class QEMUMonitorProtocol:
pass
self.__sock.setblocking(1)
if not self.__events and wait:
ret = self.__json_read(only_event=True)
if ret == None:
# We are in blocking mode, if don't get anything, something
# went wrong
raise QMPConnectError("Error while reading from socket")
self.__json_read(only_event=True)
return self.__events
def clear_events(self):
@@ -193,9 +188,3 @@ class QEMUMonitorProtocol:
def settimeout(self, timeout):
self.__sock.settimeout(timeout)
def get_sock_fd(self):
return self.__sock.fileno()
def is_scm_available(self):
return self.__sock.family == socket.AF_UNIX

README

@@ -1,3 +1,3 @@
Read the documentation in qemu-doc.html or on http://wiki.qemu-project.org
Read the documentation in qemu-doc.html or on http://wiki.qemu.org
- QEMU team

TODO

@@ -0,0 +1,37 @@
General:
-------
- cycle counter for all archs
- cpu_interrupt() win32/SMP fix
- merge PIC spurious interrupt patch
- warning for OS/2: must not use 128 MB memory (merge bochs cmos patch ?)
- config file (at least for windows/Mac OS X)
- update doc: PCI infos.
- basic VGA optimizations
- better code fetch
- do not resize vga if invalid size.
- TLB code protection support for PPC
- disable SMC handling for ARM/SPARC/PPC (not finished)
- see undefined flags for BTx insn
- keyboard output buffer filling timing emulation
- tests for each target CPU
- fix all remaining thread lock issues (must put TBs in a specific invalid
state, find a solution for tb_flush()).
ppc specific:
------------
- TLB invalidate not needed if msr_pr changes
- enable shift optimizations ?
linux-user specific:
-------------------
- remove threading support as it cannot work at this point
- improve IPC syscalls
- more syscalls (in particular all 64 bit ones, IPCs, fix 64 bit
issues, fix 16 bit uid issues)
- use kernel traps for unaligned accesses on ARM ?
lower priority:
--------------
- int15 ah=86: use better timing
- use -msoft-float on ARM


@@ -1 +1 @@
2.0.50
1.3.1

a.out.h

@@ -0,0 +1,430 @@
/* a.out.h
Copyright 1997, 1998, 1999, 2001 Red Hat, Inc.
This file is part of Cygwin.
This software is a copyrighted work licensed under the terms of the
Cygwin license. Please consult the file "CYGWIN_LICENSE" for
details. */
#ifndef _A_OUT_H_
#define _A_OUT_H_
#ifdef __cplusplus
extern "C" {
#endif
#define COFF_IMAGE_WITH_PE
#define COFF_LONG_SECTION_NAMES
/*** coff information for Intel 386/486. */
/********************** FILE HEADER **********************/
struct external_filehdr {
short f_magic; /* magic number */
short f_nscns; /* number of sections */
host_ulong f_timdat; /* time & date stamp */
host_ulong f_symptr; /* file pointer to symtab */
host_ulong f_nsyms; /* number of symtab entries */
short f_opthdr; /* sizeof(optional hdr) */
short f_flags; /* flags */
};
/* Bits for f_flags:
* F_RELFLG relocation info stripped from file
* F_EXEC file is executable (no unresolved external references)
* F_LNNO line numbers stripped from file
* F_LSYMS local symbols stripped from file
* F_AR32WR file has byte ordering of an AR32WR machine (e.g. vax)
*/
#define F_RELFLG (0x0001)
#define F_EXEC (0x0002)
#define F_LNNO (0x0004)
#define F_LSYMS (0x0008)
#define I386MAGIC 0x14c
#define I386PTXMAGIC 0x154
#define I386AIXMAGIC 0x175
/* This is Lynx's all-platform magic number for executables. */
#define LYNXCOFFMAGIC 0415
#define I386BADMAG(x) (((x).f_magic != I386MAGIC) \
&& (x).f_magic != I386AIXMAGIC \
&& (x).f_magic != I386PTXMAGIC \
&& (x).f_magic != LYNXCOFFMAGIC)
#define FILHDR struct external_filehdr
#define FILHSZ 20
/********************** AOUT "OPTIONAL HEADER"=
**********************/
typedef struct
{
unsigned short magic; /* type of file */
unsigned short vstamp; /* version stamp */
host_ulong tsize; /* text size in bytes, padded to FW bdry*/
host_ulong dsize; /* initialized data " " */
host_ulong bsize; /* uninitialized data " " */
host_ulong entry; /* entry pt. */
host_ulong text_start; /* base of text used for this file */
host_ulong data_start; /* base of data used for this file=
*/
}
AOUTHDR;
#define AOUTSZ 28
#define AOUTHDRSZ 28
#define OMAGIC 0404 /* object files, eg as output */
#define ZMAGIC 0413 /* demand load format, eg normal ld output */
#define STMAGIC 0401 /* target shlib */
#define SHMAGIC 0443 /* host shlib */
/* define some NT default values */
/* #define NT_IMAGE_BASE 0x400000 moved to internal.h */
#define NT_SECTION_ALIGNMENT 0x1000
#define NT_FILE_ALIGNMENT 0x200
#define NT_DEF_RESERVE 0x100000
#define NT_DEF_COMMIT 0x1000
/********************** SECTION HEADER **********************/
struct external_scnhdr {
char s_name[8]; /* section name */
host_ulong s_paddr; /* physical address, offset
of last addr in scn */
host_ulong s_vaddr; /* virtual address */
host_ulong s_size; /* section size */
host_ulong s_scnptr; /* file ptr to raw data for section */
host_ulong s_relptr; /* file ptr to relocation */
host_ulong s_lnnoptr; /* file ptr to line numbers */
unsigned short s_nreloc; /* number of relocation entries */
unsigned short s_nlnno; /* number of line number entries*/
host_ulong s_flags; /* flags */
};
#define SCNHDR struct external_scnhdr
#define SCNHSZ 40
/*
* names of "special" sections
*/
#define _TEXT ".text"
#define _DATA ".data"
#define _BSS ".bss"
#define _COMMENT ".comment"
#define _LIB ".lib"
/********************** LINE NUMBERS **********************/
/* 1 line number entry for every "breakpointable" source line in a section.
* Line numbers are grouped on a per function basis; first entry in a function
* grouping will have l_lnno = 0 and in place of physical address will be the
* symbol table index of the function name.
*/
struct external_lineno {
union {
host_ulong l_symndx; /* function name symbol index, iff l_lnno 0 */
host_ulong l_paddr; /* (physical) address of line number */
} l_addr;
unsigned short l_lnno; /* line number */
};
#define LINENO struct external_lineno
#define LINESZ 6
/********************** SYMBOLS **********************/
#define E_SYMNMLEN 8 /* # characters in a symbol name */
#define E_FILNMLEN 14 /* # characters in a file name */
#define E_DIMNUM 4 /* # array dimensions in auxiliary entry */
struct QEMU_PACKED external_syment
{
union {
char e_name[E_SYMNMLEN];
struct {
host_ulong e_zeroes;
host_ulong e_offset;
} e;
} e;
host_ulong e_value;
unsigned short e_scnum;
unsigned short e_type;
char e_sclass[1];
char e_numaux[1];
};
#define N_BTMASK (0xf)
#define N_TMASK (0x30)
#define N_BTSHFT (4)
#define N_TSHIFT (2)
union external_auxent {
struct {
host_ulong x_tagndx; /* str, un, or enum tag indx */
union {
struct {
unsigned short x_lnno; /* declaration line number */
unsigned short x_size; /* str/union/array size */
} x_lnsz;
host_ulong x_fsize; /* size of function */
} x_misc;
union {
struct { /* if ISFCN, tag, or .bb */
host_ulong x_lnnoptr;/* ptr to fcn line # */
host_ulong x_endndx; /* entry ndx past block end */
} x_fcn;
struct { /* if ISARY, up to 4 dimen. */
char x_dimen[E_DIMNUM][2];
} x_ary;
} x_fcnary;
unsigned short x_tvndx; /* tv index */
} x_sym;
union {
char x_fname[E_FILNMLEN];
struct {
host_ulong x_zeroes;
host_ulong x_offset;
} x_n;
} x_file;
struct {
host_ulong x_scnlen; /* section length */
unsigned short x_nreloc; /* # relocation entries */
unsigned short x_nlinno; /* # line numbers */
host_ulong x_checksum; /* section COMDAT checksum */
unsigned short x_associated;/* COMDAT associated section index */
char x_comdat[1]; /* COMDAT selection number */
} x_scn;
struct {
host_ulong x_tvfill; /* tv fill value */
unsigned short x_tvlen; /* length of .tv */
char x_tvran[2][2]; /* tv range */
} x_tv; /* info about .tv section (in auxent of symbol .tv)) */
};
#define SYMENT struct external_syment
#define SYMESZ 18
#define AUXENT union external_auxent
#define AUXESZ 18
#define _ETEXT "etext"
/********************** RELOCATION DIRECTIVES **********************/
struct external_reloc {
char r_vaddr[4];
char r_symndx[4];
char r_type[2];
};
#define RELOC struct external_reloc
#define RELSZ 10
/* end of coff/i386.h */
/* PE COFF header information */
#ifndef _PE_H
#define _PE_H
/* NT specific file attributes */
#define IMAGE_FILE_RELOCS_STRIPPED 0x0001
#define IMAGE_FILE_EXECUTABLE_IMAGE 0x0002
#define IMAGE_FILE_LINE_NUMS_STRIPPED 0x0004
#define IMAGE_FILE_LOCAL_SYMS_STRIPPED 0x0008
#define IMAGE_FILE_BYTES_REVERSED_LO 0x0080
#define IMAGE_FILE_32BIT_MACHINE 0x0100
#define IMAGE_FILE_DEBUG_STRIPPED 0x0200
#define IMAGE_FILE_SYSTEM 0x1000
#define IMAGE_FILE_DLL 0x2000
#define IMAGE_FILE_BYTES_REVERSED_HI 0x8000
/* additional flags to be set for section headers to allow the NT loader to
read and write to the section data (to replace the addresses of data in
dlls for one thing); also to execute the section in .text's case=
*/
#define IMAGE_SCN_MEM_DISCARDABLE 0x02000000
#define IMAGE_SCN_MEM_EXECUTE 0x20000000
#define IMAGE_SCN_MEM_READ 0x40000000
#define IMAGE_SCN_MEM_WRITE 0x80000000
/*
* Section characteristics added for ppc-nt
*/
#define IMAGE_SCN_TYPE_NO_PAD 0x00000008 /* Reserved. */
#define IMAGE_SCN_CNT_CODE 0x00000020 /* Section contains code. */
#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040 /* Section contains initialized data. */
#define IMAGE_SCN_CNT_UNINITIALIZED_DATA 0x00000080 /* Section contains uninitialized data. */
#define IMAGE_SCN_LNK_OTHER 0x00000100 /* Reserved. */
#define IMAGE_SCN_LNK_INFO 0x00000200 /* Section contains comments or some other type of information. */
#define IMAGE_SCN_LNK_REMOVE 0x00000800 /* Section contents will not become part of image. */
#define IMAGE_SCN_LNK_COMDAT 0x00001000 /* Section contents comdat. */
#define IMAGE_SCN_MEM_FARDATA 0x00008000
#define IMAGE_SCN_MEM_PURGEABLE 0x00020000
#define IMAGE_SCN_MEM_16BIT 0x00020000
#define IMAGE_SCN_MEM_LOCKED 0x00040000
#define IMAGE_SCN_MEM_PRELOAD 0x00080000
#define IMAGE_SCN_ALIGN_1BYTES 0x00100000
#define IMAGE_SCN_ALIGN_2BYTES 0x00200000
#define IMAGE_SCN_ALIGN_4BYTES 0x00300000
#define IMAGE_SCN_ALIGN_8BYTES 0x00400000
#define IMAGE_SCN_ALIGN_16BYTES 0x00500000 /* Default alignment if no others are specified. */
#define IMAGE_SCN_ALIGN_32BYTES 0x00600000
#define IMAGE_SCN_ALIGN_64BYTES 0x00700000
#define IMAGE_SCN_LNK_NRELOC_OVFL 0x01000000 /* Section contains extended relocations. */
#define IMAGE_SCN_MEM_NOT_CACHED 0x04000000 /* Section is not cachable. */
#define IMAGE_SCN_MEM_NOT_PAGED 0x08000000 /* Section is not pageable. */
#define IMAGE_SCN_MEM_SHARED 0x10000000 /* Section is shareable. */
/* COMDAT selection codes. */
#define IMAGE_COMDAT_SELECT_NODUPLICATES (1) /* Warn if duplicates. */
#define IMAGE_COMDAT_SELECT_ANY (2) /* No warning. */
#define IMAGE_COMDAT_SELECT_SAME_SIZE (3) /* Warn if different size. */
#define IMAGE_COMDAT_SELECT_EXACT_MATCH (4) /* Warn if different. */
#define IMAGE_COMDAT_SELECT_ASSOCIATIVE (5) /* Base on other section. */
/* Magic values that are true for all dos/nt implementations */
#define DOSMAGIC 0x5a4d
#define NT_SIGNATURE 0x00004550
/* NT allows long filenames, we want to accommodate this. This may break
some of the bfd functions */
#undef FILNMLEN
#define FILNMLEN 18 /* # characters in a file name */
#ifdef COFF_IMAGE_WITH_PE
/* The filehdr is only weired in images */
#undef FILHDR
struct external_PE_filehdr
{
/* DOS header fields */
unsigned short e_magic; /* Magic number, 0x5a4d */
unsigned short e_cblp; /* Bytes on last page of file, 0x90 */
unsigned short e_cp; /* Pages in file, 0x3 */
unsigned short e_crlc; /* Relocations, 0x0 */
unsigned short e_cparhdr; /* Size of header in paragraphs, 0x4 */
unsigned short e_minalloc; /* Minimum extra paragraphs needed, 0x0 */
unsigned short e_maxalloc; /* Maximum extra paragraphs needed, 0xFFFF */
unsigned short e_ss; /* Initial (relative) SS value, 0x0 */
unsigned short e_sp; /* Initial SP value, 0xb8 */
unsigned short e_csum; /* Checksum, 0x0 */
unsigned short e_ip; /* Initial IP value, 0x0 */
unsigned short e_cs; /* Initial (relative) CS value, 0x0 */
unsigned short e_lfarlc; /* File address of relocation table, 0x40 */
unsigned short e_ovno; /* Overlay number, 0x0 */
char e_res[4][2]; /* Reserved words, all 0x0 */
unsigned short e_oemid; /* OEM identifier (for e_oeminfo), 0x0 */
unsigned short e_oeminfo; /* OEM information; e_oemid specific, 0x0 */
char e_res2[10][2]; /* Reserved words, all 0x0 */
host_ulong e_lfanew; /* File address of new exe header, 0x80 */
char dos_message[16][4]; /* other stuff, always follow DOS header */
unsigned int nt_signature; /* required NT signature, 0x4550 */
/* From standard header */
unsigned short f_magic; /* magic number */
unsigned short f_nscns; /* number of sections */
host_ulong f_timdat; /* time & date stamp */
host_ulong f_symptr; /* file pointer to symtab */
host_ulong f_nsyms; /* number of symtab entries */
unsigned short f_opthdr; /* sizeof(optional hdr) */
unsigned short f_flags; /* flags */
};
#define FILHDR struct external_PE_filehdr
#undef FILHSZ
#define FILHSZ 152
#endif
typedef struct
{
unsigned short magic; /* type of file */
unsigned short vstamp; /* version stamp */
host_ulong tsize; /* text size in bytes, padded to FW bdry*/
host_ulong dsize; /* initialized data " " */
host_ulong bsize; /* uninitialized data " " */
host_ulong entry; /* entry pt. */
host_ulong text_start; /* base of text used for this file */
host_ulong data_start; /* base of all data used for this file */
/* NT extra fields; see internal.h for descriptions */
host_ulong ImageBase;
host_ulong SectionAlignment;
host_ulong FileAlignment;
unsigned short MajorOperatingSystemVersion;
unsigned short MinorOperatingSystemVersion;
unsigned short MajorImageVersion;
unsigned short MinorImageVersion;
unsigned short MajorSubsystemVersion;
unsigned short MinorSubsystemVersion;
char Reserved1[4];
host_ulong SizeOfImage;
host_ulong SizeOfHeaders;
host_ulong CheckSum;
unsigned short Subsystem;
unsigned short DllCharacteristics;
host_ulong SizeOfStackReserve;
host_ulong SizeOfStackCommit;
host_ulong SizeOfHeapReserve;
host_ulong SizeOfHeapCommit;
host_ulong LoaderFlags;
host_ulong NumberOfRvaAndSizes;
/* IMAGE_DATA_DIRECTORY DataDirectory[IMAGE_NUMBEROF_DIRECTORY_ENTRIES]; */
char DataDirectory[16][2][4]; /* 16 entries, 2 elements/entry, 4 chars */
} PEAOUTHDR;
#undef AOUTSZ
#define AOUTSZ (AOUTHDRSZ + 196)
#undef E_FILNMLEN
#define E_FILNMLEN 18 /* # characters in a file name */
#endif
/* end of coff/pe.h */
#define DT_NON (0) /* no derived type */
#define DT_PTR (1) /* pointer */
#define DT_FCN (2) /* function */
#define DT_ARY (3) /* array */
#define ISPTR(x) (((x) & N_TMASK) == (DT_PTR << N_BTSHFT))
#define ISFCN(x) (((x) & N_TMASK) == (DT_FCN << N_BTSHFT))
#define ISARY(x) (((x) & N_TMASK) == (DT_ARY << N_BTSHFT))
#ifdef __cplusplus
}
#endif
#endif /* _A_OUT_H_ */


@@ -24,7 +24,7 @@
#include "qemu-common.h"
#include "qemu/acl.h"
#include "acl.h"
#ifdef CONFIG_FNMATCH
#include <fnmatch.h>
@@ -103,8 +103,8 @@ void qemu_acl_reset(qemu_acl *acl)
acl->defaultDeny = 1;
QTAILQ_FOREACH_SAFE(entry, &acl->entries, next, next_entry) {
QTAILQ_REMOVE(&acl->entries, entry, next);
g_free(entry->match);
g_free(entry);
free(entry->match);
free(entry);
}
acl->nentries = 0;
}
@@ -138,9 +138,9 @@ int qemu_acl_insert(qemu_acl *acl,
if (index <= 0)
return -1;
if (index > acl->nentries) {
if (index >= acl->nentries)
return qemu_acl_append(acl, deny, match);
}
entry = g_malloc(sizeof(*entry));
entry->match = g_strdup(match);
@@ -168,9 +168,6 @@ int qemu_acl_remove(qemu_acl *acl,
i++;
if (strcmp(entry->match, match) == 0) {
QTAILQ_REMOVE(&acl->entries, entry, next);
acl->nentries--;
g_free(entry->match);
g_free(entry);
return i;
}
}


@@ -25,7 +25,7 @@
#ifndef __QEMU_ACL_H__
#define __QEMU_ACL_H__
#include "qemu/queue.h"
#include "qemu-queue.h"
typedef struct qemu_acl_entry qemu_acl_entry;
typedef struct qemu_acl qemu_acl;


@@ -28,9 +28,14 @@
* EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include "qemu-common.h"
#include "qemu/aes.h"
#include "aes.h"
#ifndef NDEBUG
#define NDEBUG
#endif
typedef uint32_t u32;
typedef uint16_t u16;
typedef uint8_t u8;
/* This controls loop-unrolling in aes_core.c */
@@ -39,20 +44,20 @@ typedef uint8_t u8;
# define PUTU32(ct, st) { (ct)[0] = (u8)((st) >> 24); (ct)[1] = (u8)((st) >> 16); (ct)[2] = (u8)((st) >> 8); (ct)[3] = (u8)(st); }
/*
AES_Te0[x] = S [x].[02, 01, 01, 03];
AES_Te1[x] = S [x].[03, 02, 01, 01];
AES_Te2[x] = S [x].[01, 03, 02, 01];
AES_Te3[x] = S [x].[01, 01, 03, 02];
AES_Te4[x] = S [x].[01, 01, 01, 01];
Te0[x] = S [x].[02, 01, 01, 03];
Te1[x] = S [x].[03, 02, 01, 01];
Te2[x] = S [x].[01, 03, 02, 01];
Te3[x] = S [x].[01, 01, 03, 02];
Te4[x] = S [x].[01, 01, 01, 01];
AES_Td0[x] = Si[x].[0e, 09, 0d, 0b];
AES_Td1[x] = Si[x].[0b, 0e, 09, 0d];
AES_Td2[x] = Si[x].[0d, 0b, 0e, 09];
AES_Td3[x] = Si[x].[09, 0d, 0b, 0e];
AES_Td4[x] = Si[x].[01, 01, 01, 01];
Td0[x] = Si[x].[0e, 09, 0d, 0b];
Td1[x] = Si[x].[0b, 0e, 09, 0d];
Td2[x] = Si[x].[0d, 0b, 0e, 09];
Td3[x] = Si[x].[09, 0d, 0b, 0e];
Td4[x] = Si[x].[01, 01, 01, 01];
*/
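/* Illustrative check (an added note, not in the original source): for index 0
 * the AES S-box gives S[0x00] = 0x63; in GF(2^8), 02*0x63 = 0xc6 and
 * 03*0x63 = 0xa5, so the packed entry is 0xc66363a5, the first value of
 * AES_Te0/Te0 below. */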
const uint32_t AES_Te0[256] = {
static const u32 Te0[256] = {
0xc66363a5U, 0xf87c7c84U, 0xee777799U, 0xf67b7b8dU,
0xfff2f20dU, 0xd66b6bbdU, 0xde6f6fb1U, 0x91c5c554U,
0x60303050U, 0x02010103U, 0xce6767a9U, 0x562b2b7dU,
@@ -118,7 +123,7 @@ const uint32_t AES_Te0[256] = {
0x824141c3U, 0x299999b0U, 0x5a2d2d77U, 0x1e0f0f11U,
0x7bb0b0cbU, 0xa85454fcU, 0x6dbbbbd6U, 0x2c16163aU,
};
const uint32_t AES_Te1[256] = {
static const u32 Te1[256] = {
0xa5c66363U, 0x84f87c7cU, 0x99ee7777U, 0x8df67b7bU,
0x0dfff2f2U, 0xbdd66b6bU, 0xb1de6f6fU, 0x5491c5c5U,
0x50603030U, 0x03020101U, 0xa9ce6767U, 0x7d562b2bU,
@@ -184,7 +189,7 @@ const uint32_t AES_Te1[256] = {
0xc3824141U, 0xb0299999U, 0x775a2d2dU, 0x111e0f0fU,
0xcb7bb0b0U, 0xfca85454U, 0xd66dbbbbU, 0x3a2c1616U,
};
const uint32_t AES_Te2[256] = {
static const u32 Te2[256] = {
0x63a5c663U, 0x7c84f87cU, 0x7799ee77U, 0x7b8df67bU,
0xf20dfff2U, 0x6bbdd66bU, 0x6fb1de6fU, 0xc55491c5U,
0x30506030U, 0x01030201U, 0x67a9ce67U, 0x2b7d562bU,
@@ -250,7 +255,7 @@ const uint32_t AES_Te2[256] = {
0x41c38241U, 0x99b02999U, 0x2d775a2dU, 0x0f111e0fU,
0xb0cb7bb0U, 0x54fca854U, 0xbbd66dbbU, 0x163a2c16U,
};
const uint32_t AES_Te3[256] = {
static const u32 Te3[256] = {
0x6363a5c6U, 0x7c7c84f8U, 0x777799eeU, 0x7b7b8df6U,
0xf2f20dffU, 0x6b6bbdd6U, 0x6f6fb1deU, 0xc5c55491U,
@@ -317,7 +322,7 @@ const uint32_t AES_Te3[256] = {
0x4141c382U, 0x9999b029U, 0x2d2d775aU, 0x0f0f111eU,
0xb0b0cb7bU, 0x5454fca8U, 0xbbbbd66dU, 0x16163a2cU,
};
const uint32_t AES_Te4[256] = {
static const u32 Te4[256] = {
0x63636363U, 0x7c7c7c7cU, 0x77777777U, 0x7b7b7b7bU,
0xf2f2f2f2U, 0x6b6b6b6bU, 0x6f6f6f6fU, 0xc5c5c5c5U,
0x30303030U, 0x01010101U, 0x67676767U, 0x2b2b2b2bU,
@@ -383,7 +388,7 @@ const uint32_t AES_Te4[256] = {
0x41414141U, 0x99999999U, 0x2d2d2d2dU, 0x0f0f0f0fU,
0xb0b0b0b0U, 0x54545454U, 0xbbbbbbbbU, 0x16161616U,
};
const uint32_t AES_Td0[256] = {
static const u32 Td0[256] = {
0x51f4a750U, 0x7e416553U, 0x1a17a4c3U, 0x3a275e96U,
0x3bab6bcbU, 0x1f9d45f1U, 0xacfa58abU, 0x4be30393U,
0x2030fa55U, 0xad766df6U, 0x88cc7691U, 0xf5024c25U,
@@ -449,7 +454,7 @@ const uint32_t AES_Td0[256] = {
0x39a80171U, 0x080cb3deU, 0xd8b4e49cU, 0x6456c190U,
0x7bcb8461U, 0xd532b670U, 0x486c5c74U, 0xd0b85742U,
};
const uint32_t AES_Td1[256] = {
static const u32 Td1[256] = {
0x5051f4a7U, 0x537e4165U, 0xc31a17a4U, 0x963a275eU,
0xcb3bab6bU, 0xf11f9d45U, 0xabacfa58U, 0x934be303U,
0x552030faU, 0xf6ad766dU, 0x9188cc76U, 0x25f5024cU,
@@ -515,7 +520,7 @@ const uint32_t AES_Td1[256] = {
0x7139a801U, 0xde080cb3U, 0x9cd8b4e4U, 0x906456c1U,
0x617bcb84U, 0x70d532b6U, 0x74486c5cU, 0x42d0b857U,
};
const uint32_t AES_Td2[256] = {
static const u32 Td2[256] = {
0xa75051f4U, 0x65537e41U, 0xa4c31a17U, 0x5e963a27U,
0x6bcb3babU, 0x45f11f9dU, 0x58abacfaU, 0x03934be3U,
0xfa552030U, 0x6df6ad76U, 0x769188ccU, 0x4c25f502U,
@@ -582,7 +587,7 @@ const uint32_t AES_Td2[256] = {
0x017139a8U, 0xb3de080cU, 0xe49cd8b4U, 0xc1906456U,
0x84617bcbU, 0xb670d532U, 0x5c74486cU, 0x5742d0b8U,
};
const uint32_t AES_Td3[256] = {
static const u32 Td3[256] = {
0xf4a75051U, 0x4165537eU, 0x17a4c31aU, 0x275e963aU,
0xab6bcb3bU, 0x9d45f11fU, 0xfa58abacU, 0xe303934bU,
0x30fa5520U, 0x766df6adU, 0xcc769188U, 0x024c25f5U,
@@ -648,7 +653,7 @@ const uint32_t AES_Td3[256] = {
0xa8017139U, 0x0cb3de08U, 0xb4e49cd8U, 0x56c19064U,
0xcb84617bU, 0x32b670d5U, 0x6c5c7448U, 0xb85742d0U,
};
const uint32_t AES_Td4[256] = {
static const u32 Td4[256] = {
0x52525252U, 0x09090909U, 0x6a6a6a6aU, 0xd5d5d5d5U,
0x30303030U, 0x36363636U, 0xa5a5a5a5U, 0x38383838U,
0xbfbfbfbfU, 0x40404040U, 0xa3a3a3a3U, 0x9e9e9e9eU,
@@ -752,10 +757,10 @@ int AES_set_encrypt_key(const unsigned char *userKey, const int bits,
while (1) {
temp = rk[3];
rk[4] = rk[0] ^
(AES_Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp ) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp >> 24) ] & 0x000000ff) ^
(Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(Te4[(temp ) & 0xff] & 0x0000ff00) ^
(Te4[(temp >> 24) ] & 0x000000ff) ^
rcon[i];
rk[5] = rk[1] ^ rk[4];
rk[6] = rk[2] ^ rk[5];
@@ -772,10 +777,10 @@ int AES_set_encrypt_key(const unsigned char *userKey, const int bits,
while (1) {
temp = rk[ 5];
rk[ 6] = rk[ 0] ^
(AES_Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp ) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp >> 24) ] & 0x000000ff) ^
(Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(Te4[(temp ) & 0xff] & 0x0000ff00) ^
(Te4[(temp >> 24) ] & 0x000000ff) ^
rcon[i];
rk[ 7] = rk[ 1] ^ rk[ 6];
rk[ 8] = rk[ 2] ^ rk[ 7];
@@ -794,10 +799,10 @@ int AES_set_encrypt_key(const unsigned char *userKey, const int bits,
while (1) {
temp = rk[ 7];
rk[ 8] = rk[ 0] ^
(AES_Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp ) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp >> 24) ] & 0x000000ff) ^
(Te4[(temp >> 16) & 0xff] & 0xff000000) ^
(Te4[(temp >> 8) & 0xff] & 0x00ff0000) ^
(Te4[(temp ) & 0xff] & 0x0000ff00) ^
(Te4[(temp >> 24) ] & 0x000000ff) ^
rcon[i];
rk[ 9] = rk[ 1] ^ rk[ 8];
rk[10] = rk[ 2] ^ rk[ 9];
@@ -807,10 +812,10 @@ int AES_set_encrypt_key(const unsigned char *userKey, const int bits,
}
temp = rk[11];
rk[12] = rk[ 4] ^
(AES_Te4[(temp >> 24) ] & 0xff000000) ^
(AES_Te4[(temp >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(temp >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(temp ) & 0xff] & 0x000000ff);
(Te4[(temp >> 24) ] & 0xff000000) ^
(Te4[(temp >> 16) & 0xff] & 0x00ff0000) ^
(Te4[(temp >> 8) & 0xff] & 0x0000ff00) ^
(Te4[(temp ) & 0xff] & 0x000000ff);
rk[13] = rk[ 5] ^ rk[12];
rk[14] = rk[ 6] ^ rk[13];
rk[15] = rk[ 7] ^ rk[14];
@@ -849,25 +854,25 @@ int AES_set_decrypt_key(const unsigned char *userKey, const int bits,
for (i = 1; i < (key->rounds); i++) {
rk += 4;
rk[0] =
AES_Td0[AES_Te4[(rk[0] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[0] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[0] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[0] ) & 0xff] & 0xff];
Td0[Te4[(rk[0] >> 24) ] & 0xff] ^
Td1[Te4[(rk[0] >> 16) & 0xff] & 0xff] ^
Td2[Te4[(rk[0] >> 8) & 0xff] & 0xff] ^
Td3[Te4[(rk[0] ) & 0xff] & 0xff];
rk[1] =
AES_Td0[AES_Te4[(rk[1] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[1] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[1] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[1] ) & 0xff] & 0xff];
Td0[Te4[(rk[1] >> 24) ] & 0xff] ^
Td1[Te4[(rk[1] >> 16) & 0xff] & 0xff] ^
Td2[Te4[(rk[1] >> 8) & 0xff] & 0xff] ^
Td3[Te4[(rk[1] ) & 0xff] & 0xff];
rk[2] =
AES_Td0[AES_Te4[(rk[2] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[2] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[2] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[2] ) & 0xff] & 0xff];
Td0[Te4[(rk[2] >> 24) ] & 0xff] ^
Td1[Te4[(rk[2] >> 16) & 0xff] & 0xff] ^
Td2[Te4[(rk[2] >> 8) & 0xff] & 0xff] ^
Td3[Te4[(rk[2] ) & 0xff] & 0xff];
rk[3] =
AES_Td0[AES_Te4[(rk[3] >> 24) ] & 0xff] ^
AES_Td1[AES_Te4[(rk[3] >> 16) & 0xff] & 0xff] ^
AES_Td2[AES_Te4[(rk[3] >> 8) & 0xff] & 0xff] ^
AES_Td3[AES_Te4[(rk[3] ) & 0xff] & 0xff];
Td0[Te4[(rk[3] >> 24) ] & 0xff] ^
Td1[Te4[(rk[3] >> 16) & 0xff] & 0xff] ^
Td2[Te4[(rk[3] >> 8) & 0xff] & 0xff] ^
Td3[Te4[(rk[3] ) & 0xff] & 0xff];
}
return 0;
}
@@ -899,72 +904,72 @@ void AES_encrypt(const unsigned char *in, unsigned char *out,
s3 = GETU32(in + 12) ^ rk[3];
#ifdef FULL_UNROLL
/* round 1: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[ 4];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[ 5];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[ 6];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[ 7];
t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[ 4];
t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[ 5];
t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[ 6];
t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[ 7];
/* round 2: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[ 8];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[ 9];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[10];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[11];
s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[ 8];
s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[ 9];
s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[10];
s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[11];
/* round 3: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[12];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[13];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[14];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[15];
t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[12];
t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[13];
t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[14];
t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[15];
/* round 4: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[16];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[17];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[18];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[19];
s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[16];
s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[17];
s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[18];
s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[19];
/* round 5: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[20];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[21];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[22];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[23];
t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[20];
t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[21];
t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[22];
t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[23];
/* round 6: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[24];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[25];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[26];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[27];
s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[24];
s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[25];
s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[26];
s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[27];
/* round 7: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[28];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[29];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[30];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[31];
t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[28];
t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[29];
t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[30];
t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[31];
/* round 8: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[32];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[33];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[34];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[35];
s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[32];
s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[33];
s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[34];
s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[35];
/* round 9: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[36];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[37];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[38];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[39];
t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[36];
t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[37];
t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[38];
t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[39];
if (key->rounds > 10) {
/* round 10: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[40];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[41];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[42];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[43];
s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[40];
s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[41];
s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[42];
s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[43];
/* round 11: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[44];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[45];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[46];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[47];
t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[44];
t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[45];
t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[46];
t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[47];
if (key->rounds > 12) {
/* round 12: */
s0 = AES_Te0[t0 >> 24] ^ AES_Te1[(t1 >> 16) & 0xff] ^ AES_Te2[(t2 >> 8) & 0xff] ^ AES_Te3[t3 & 0xff] ^ rk[48];
s1 = AES_Te0[t1 >> 24] ^ AES_Te1[(t2 >> 16) & 0xff] ^ AES_Te2[(t3 >> 8) & 0xff] ^ AES_Te3[t0 & 0xff] ^ rk[49];
s2 = AES_Te0[t2 >> 24] ^ AES_Te1[(t3 >> 16) & 0xff] ^ AES_Te2[(t0 >> 8) & 0xff] ^ AES_Te3[t1 & 0xff] ^ rk[50];
s3 = AES_Te0[t3 >> 24] ^ AES_Te1[(t0 >> 16) & 0xff] ^ AES_Te2[(t1 >> 8) & 0xff] ^ AES_Te3[t2 & 0xff] ^ rk[51];
s0 = Te0[t0 >> 24] ^ Te1[(t1 >> 16) & 0xff] ^ Te2[(t2 >> 8) & 0xff] ^ Te3[t3 & 0xff] ^ rk[48];
s1 = Te0[t1 >> 24] ^ Te1[(t2 >> 16) & 0xff] ^ Te2[(t3 >> 8) & 0xff] ^ Te3[t0 & 0xff] ^ rk[49];
s2 = Te0[t2 >> 24] ^ Te1[(t3 >> 16) & 0xff] ^ Te2[(t0 >> 8) & 0xff] ^ Te3[t1 & 0xff] ^ rk[50];
s3 = Te0[t3 >> 24] ^ Te1[(t0 >> 16) & 0xff] ^ Te2[(t1 >> 8) & 0xff] ^ Te3[t2 & 0xff] ^ rk[51];
/* round 13: */
t0 = AES_Te0[s0 >> 24] ^ AES_Te1[(s1 >> 16) & 0xff] ^ AES_Te2[(s2 >> 8) & 0xff] ^ AES_Te3[s3 & 0xff] ^ rk[52];
t1 = AES_Te0[s1 >> 24] ^ AES_Te1[(s2 >> 16) & 0xff] ^ AES_Te2[(s3 >> 8) & 0xff] ^ AES_Te3[s0 & 0xff] ^ rk[53];
t2 = AES_Te0[s2 >> 24] ^ AES_Te1[(s3 >> 16) & 0xff] ^ AES_Te2[(s0 >> 8) & 0xff] ^ AES_Te3[s1 & 0xff] ^ rk[54];
t3 = AES_Te0[s3 >> 24] ^ AES_Te1[(s0 >> 16) & 0xff] ^ AES_Te2[(s1 >> 8) & 0xff] ^ AES_Te3[s2 & 0xff] ^ rk[55];
t0 = Te0[s0 >> 24] ^ Te1[(s1 >> 16) & 0xff] ^ Te2[(s2 >> 8) & 0xff] ^ Te3[s3 & 0xff] ^ rk[52];
t1 = Te0[s1 >> 24] ^ Te1[(s2 >> 16) & 0xff] ^ Te2[(s3 >> 8) & 0xff] ^ Te3[s0 & 0xff] ^ rk[53];
t2 = Te0[s2 >> 24] ^ Te1[(s3 >> 16) & 0xff] ^ Te2[(s0 >> 8) & 0xff] ^ Te3[s1 & 0xff] ^ rk[54];
t3 = Te0[s3 >> 24] ^ Te1[(s0 >> 16) & 0xff] ^ Te2[(s1 >> 8) & 0xff] ^ Te3[s2 & 0xff] ^ rk[55];
}
}
rk += key->rounds << 2;
@@ -975,28 +980,28 @@ void AES_encrypt(const unsigned char *in, unsigned char *out,
r = key->rounds >> 1;
for (;;) {
t0 =
AES_Te0[(s0 >> 24) ] ^
AES_Te1[(s1 >> 16) & 0xff] ^
AES_Te2[(s2 >> 8) & 0xff] ^
AES_Te3[(s3 ) & 0xff] ^
Te0[(s0 >> 24) ] ^
Te1[(s1 >> 16) & 0xff] ^
Te2[(s2 >> 8) & 0xff] ^
Te3[(s3 ) & 0xff] ^
rk[4];
t1 =
AES_Te0[(s1 >> 24) ] ^
AES_Te1[(s2 >> 16) & 0xff] ^
AES_Te2[(s3 >> 8) & 0xff] ^
AES_Te3[(s0 ) & 0xff] ^
Te0[(s1 >> 24) ] ^
Te1[(s2 >> 16) & 0xff] ^
Te2[(s3 >> 8) & 0xff] ^
Te3[(s0 ) & 0xff] ^
rk[5];
t2 =
AES_Te0[(s2 >> 24) ] ^
AES_Te1[(s3 >> 16) & 0xff] ^
AES_Te2[(s0 >> 8) & 0xff] ^
AES_Te3[(s1 ) & 0xff] ^
Te0[(s2 >> 24) ] ^
Te1[(s3 >> 16) & 0xff] ^
Te2[(s0 >> 8) & 0xff] ^
Te3[(s1 ) & 0xff] ^
rk[6];
t3 =
AES_Te0[(s3 >> 24) ] ^
AES_Te1[(s0 >> 16) & 0xff] ^
AES_Te2[(s1 >> 8) & 0xff] ^
AES_Te3[(s2 ) & 0xff] ^
Te0[(s3 >> 24) ] ^
Te1[(s0 >> 16) & 0xff] ^
Te2[(s1 >> 8) & 0xff] ^
Te3[(s2 ) & 0xff] ^
rk[7];
rk += 8;
@@ -1005,28 +1010,28 @@ void AES_encrypt(const unsigned char *in, unsigned char *out,
}
s0 =
AES_Te0[(t0 >> 24) ] ^
AES_Te1[(t1 >> 16) & 0xff] ^
AES_Te2[(t2 >> 8) & 0xff] ^
AES_Te3[(t3 ) & 0xff] ^
Te0[(t0 >> 24) ] ^
Te1[(t1 >> 16) & 0xff] ^
Te2[(t2 >> 8) & 0xff] ^
Te3[(t3 ) & 0xff] ^
rk[0];
s1 =
AES_Te0[(t1 >> 24) ] ^
AES_Te1[(t2 >> 16) & 0xff] ^
AES_Te2[(t3 >> 8) & 0xff] ^
AES_Te3[(t0 ) & 0xff] ^
Te0[(t1 >> 24) ] ^
Te1[(t2 >> 16) & 0xff] ^
Te2[(t3 >> 8) & 0xff] ^
Te3[(t0 ) & 0xff] ^
rk[1];
s2 =
AES_Te0[(t2 >> 24) ] ^
AES_Te1[(t3 >> 16) & 0xff] ^
AES_Te2[(t0 >> 8) & 0xff] ^
AES_Te3[(t1 ) & 0xff] ^
Te0[(t2 >> 24) ] ^
Te1[(t3 >> 16) & 0xff] ^
Te2[(t0 >> 8) & 0xff] ^
Te3[(t1 ) & 0xff] ^
rk[2];
s3 =
AES_Te0[(t3 >> 24) ] ^
AES_Te1[(t0 >> 16) & 0xff] ^
AES_Te2[(t1 >> 8) & 0xff] ^
AES_Te3[(t2 ) & 0xff] ^
Te0[(t3 >> 24) ] ^
Te1[(t0 >> 16) & 0xff] ^
Te2[(t1 >> 8) & 0xff] ^
Te3[(t2 ) & 0xff] ^
rk[3];
}
#endif /* ?FULL_UNROLL */
@@ -1035,31 +1040,31 @@ void AES_encrypt(const unsigned char *in, unsigned char *out,
* map cipher state to byte array block:
*/
s0 =
(AES_Te4[(t0 >> 24) ] & 0xff000000) ^
(AES_Te4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t3 ) & 0xff] & 0x000000ff) ^
(Te4[(t0 >> 24) ] & 0xff000000) ^
(Te4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
(Te4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
(Te4[(t3 ) & 0xff] & 0x000000ff) ^
rk[0];
PUTU32(out , s0);
s1 =
(AES_Te4[(t1 >> 24) ] & 0xff000000) ^
(AES_Te4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t0 ) & 0xff] & 0x000000ff) ^
(Te4[(t1 >> 24) ] & 0xff000000) ^
(Te4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
(Te4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
(Te4[(t0 ) & 0xff] & 0x000000ff) ^
rk[1];
PUTU32(out + 4, s1);
s2 =
(AES_Te4[(t2 >> 24) ] & 0xff000000) ^
(AES_Te4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t1 ) & 0xff] & 0x000000ff) ^
(Te4[(t2 >> 24) ] & 0xff000000) ^
(Te4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
(Te4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
(Te4[(t1 ) & 0xff] & 0x000000ff) ^
rk[2];
PUTU32(out + 8, s2);
s3 =
(AES_Te4[(t3 >> 24) ] & 0xff000000) ^
(AES_Te4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Te4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Te4[(t2 ) & 0xff] & 0x000000ff) ^
(Te4[(t3 >> 24) ] & 0xff000000) ^
(Te4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
(Te4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
(Te4[(t2 ) & 0xff] & 0x000000ff) ^
rk[3];
PUTU32(out + 12, s3);
}
@@ -1090,72 +1095,72 @@ void AES_decrypt(const unsigned char *in, unsigned char *out,
s3 = GETU32(in + 12) ^ rk[3];
#ifdef FULL_UNROLL
/* round 1: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[ 4];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[ 5];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[ 6];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[ 7];
t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[ 4];
t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[ 5];
t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[ 6];
t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[ 7];
/* round 2: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[ 8];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[ 9];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[10];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[11];
s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[ 8];
s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[ 9];
s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[10];
s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[11];
/* round 3: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[12];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[13];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[14];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[15];
t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[12];
t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[13];
t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[14];
t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[15];
/* round 4: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[16];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[17];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[18];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[19];
s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[16];
s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[17];
s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[18];
s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[19];
/* round 5: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[20];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[21];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[22];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[23];
t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[20];
t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[21];
t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[22];
t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[23];
/* round 6: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[24];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[25];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[26];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[27];
s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[24];
s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[25];
s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[26];
s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[27];
/* round 7: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[28];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[29];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[30];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[31];
t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[28];
t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[29];
t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[30];
t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[31];
/* round 8: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[32];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[33];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[34];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[35];
s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[32];
s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[33];
s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[34];
s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[35];
/* round 9: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[36];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[37];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[38];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[39];
t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[36];
t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[37];
t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[38];
t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[39];
if (key->rounds > 10) {
/* round 10: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[40];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[41];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[42];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[43];
s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[40];
s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[41];
s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[42];
s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[43];
/* round 11: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[44];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[45];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[46];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[47];
t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[44];
t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[45];
t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[46];
t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[47];
if (key->rounds > 12) {
/* round 12: */
s0 = AES_Td0[t0 >> 24] ^ AES_Td1[(t3 >> 16) & 0xff] ^ AES_Td2[(t2 >> 8) & 0xff] ^ AES_Td3[t1 & 0xff] ^ rk[48];
s1 = AES_Td0[t1 >> 24] ^ AES_Td1[(t0 >> 16) & 0xff] ^ AES_Td2[(t3 >> 8) & 0xff] ^ AES_Td3[t2 & 0xff] ^ rk[49];
s2 = AES_Td0[t2 >> 24] ^ AES_Td1[(t1 >> 16) & 0xff] ^ AES_Td2[(t0 >> 8) & 0xff] ^ AES_Td3[t3 & 0xff] ^ rk[50];
s3 = AES_Td0[t3 >> 24] ^ AES_Td1[(t2 >> 16) & 0xff] ^ AES_Td2[(t1 >> 8) & 0xff] ^ AES_Td3[t0 & 0xff] ^ rk[51];
s0 = Td0[t0 >> 24] ^ Td1[(t3 >> 16) & 0xff] ^ Td2[(t2 >> 8) & 0xff] ^ Td3[t1 & 0xff] ^ rk[48];
s1 = Td0[t1 >> 24] ^ Td1[(t0 >> 16) & 0xff] ^ Td2[(t3 >> 8) & 0xff] ^ Td3[t2 & 0xff] ^ rk[49];
s2 = Td0[t2 >> 24] ^ Td1[(t1 >> 16) & 0xff] ^ Td2[(t0 >> 8) & 0xff] ^ Td3[t3 & 0xff] ^ rk[50];
s3 = Td0[t3 >> 24] ^ Td1[(t2 >> 16) & 0xff] ^ Td2[(t1 >> 8) & 0xff] ^ Td3[t0 & 0xff] ^ rk[51];
/* round 13: */
t0 = AES_Td0[s0 >> 24] ^ AES_Td1[(s3 >> 16) & 0xff] ^ AES_Td2[(s2 >> 8) & 0xff] ^ AES_Td3[s1 & 0xff] ^ rk[52];
t1 = AES_Td0[s1 >> 24] ^ AES_Td1[(s0 >> 16) & 0xff] ^ AES_Td2[(s3 >> 8) & 0xff] ^ AES_Td3[s2 & 0xff] ^ rk[53];
t2 = AES_Td0[s2 >> 24] ^ AES_Td1[(s1 >> 16) & 0xff] ^ AES_Td2[(s0 >> 8) & 0xff] ^ AES_Td3[s3 & 0xff] ^ rk[54];
t3 = AES_Td0[s3 >> 24] ^ AES_Td1[(s2 >> 16) & 0xff] ^ AES_Td2[(s1 >> 8) & 0xff] ^ AES_Td3[s0 & 0xff] ^ rk[55];
t0 = Td0[s0 >> 24] ^ Td1[(s3 >> 16) & 0xff] ^ Td2[(s2 >> 8) & 0xff] ^ Td3[s1 & 0xff] ^ rk[52];
t1 = Td0[s1 >> 24] ^ Td1[(s0 >> 16) & 0xff] ^ Td2[(s3 >> 8) & 0xff] ^ Td3[s2 & 0xff] ^ rk[53];
t2 = Td0[s2 >> 24] ^ Td1[(s1 >> 16) & 0xff] ^ Td2[(s0 >> 8) & 0xff] ^ Td3[s3 & 0xff] ^ rk[54];
t3 = Td0[s3 >> 24] ^ Td1[(s2 >> 16) & 0xff] ^ Td2[(s1 >> 8) & 0xff] ^ Td3[s0 & 0xff] ^ rk[55];
}
}
rk += key->rounds << 2;
@@ -1166,28 +1171,28 @@ void AES_decrypt(const unsigned char *in, unsigned char *out,
r = key->rounds >> 1;
for (;;) {
t0 =
AES_Td0[(s0 >> 24) ] ^
AES_Td1[(s3 >> 16) & 0xff] ^
AES_Td2[(s2 >> 8) & 0xff] ^
AES_Td3[(s1 ) & 0xff] ^
Td0[(s0 >> 24) ] ^
Td1[(s3 >> 16) & 0xff] ^
Td2[(s2 >> 8) & 0xff] ^
Td3[(s1 ) & 0xff] ^
rk[4];
t1 =
AES_Td0[(s1 >> 24) ] ^
AES_Td1[(s0 >> 16) & 0xff] ^
AES_Td2[(s3 >> 8) & 0xff] ^
AES_Td3[(s2 ) & 0xff] ^
Td0[(s1 >> 24) ] ^
Td1[(s0 >> 16) & 0xff] ^
Td2[(s3 >> 8) & 0xff] ^
Td3[(s2 ) & 0xff] ^
rk[5];
t2 =
AES_Td0[(s2 >> 24) ] ^
AES_Td1[(s1 >> 16) & 0xff] ^
AES_Td2[(s0 >> 8) & 0xff] ^
AES_Td3[(s3 ) & 0xff] ^
Td0[(s2 >> 24) ] ^
Td1[(s1 >> 16) & 0xff] ^
Td2[(s0 >> 8) & 0xff] ^
Td3[(s3 ) & 0xff] ^
rk[6];
t3 =
AES_Td0[(s3 >> 24) ] ^
AES_Td1[(s2 >> 16) & 0xff] ^
AES_Td2[(s1 >> 8) & 0xff] ^
AES_Td3[(s0 ) & 0xff] ^
Td0[(s3 >> 24) ] ^
Td1[(s2 >> 16) & 0xff] ^
Td2[(s1 >> 8) & 0xff] ^
Td3[(s0 ) & 0xff] ^
rk[7];
rk += 8;
@@ -1196,28 +1201,28 @@ void AES_decrypt(const unsigned char *in, unsigned char *out,
}
s0 =
AES_Td0[(t0 >> 24) ] ^
AES_Td1[(t3 >> 16) & 0xff] ^
AES_Td2[(t2 >> 8) & 0xff] ^
AES_Td3[(t1 ) & 0xff] ^
Td0[(t0 >> 24) ] ^
Td1[(t3 >> 16) & 0xff] ^
Td2[(t2 >> 8) & 0xff] ^
Td3[(t1 ) & 0xff] ^
rk[0];
s1 =
AES_Td0[(t1 >> 24) ] ^
AES_Td1[(t0 >> 16) & 0xff] ^
AES_Td2[(t3 >> 8) & 0xff] ^
AES_Td3[(t2 ) & 0xff] ^
Td0[(t1 >> 24) ] ^
Td1[(t0 >> 16) & 0xff] ^
Td2[(t3 >> 8) & 0xff] ^
Td3[(t2 ) & 0xff] ^
rk[1];
s2 =
AES_Td0[(t2 >> 24) ] ^
AES_Td1[(t1 >> 16) & 0xff] ^
AES_Td2[(t0 >> 8) & 0xff] ^
AES_Td3[(t3 ) & 0xff] ^
Td0[(t2 >> 24) ] ^
Td1[(t1 >> 16) & 0xff] ^
Td2[(t0 >> 8) & 0xff] ^
Td3[(t3 ) & 0xff] ^
rk[2];
s3 =
AES_Td0[(t3 >> 24) ] ^
AES_Td1[(t2 >> 16) & 0xff] ^
AES_Td2[(t1 >> 8) & 0xff] ^
AES_Td3[(t0 ) & 0xff] ^
Td0[(t3 >> 24) ] ^
Td1[(t2 >> 16) & 0xff] ^
Td2[(t1 >> 8) & 0xff] ^
Td3[(t0 ) & 0xff] ^
rk[3];
}
#endif /* ?FULL_UNROLL */
@@ -1226,31 +1231,31 @@ void AES_decrypt(const unsigned char *in, unsigned char *out,
* map cipher state to byte array block:
*/
s0 =
(AES_Td4[(t0 >> 24) ] & 0xff000000) ^
(AES_Td4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t1 ) & 0xff] & 0x000000ff) ^
(Td4[(t0 >> 24) ] & 0xff000000) ^
(Td4[(t3 >> 16) & 0xff] & 0x00ff0000) ^
(Td4[(t2 >> 8) & 0xff] & 0x0000ff00) ^
(Td4[(t1 ) & 0xff] & 0x000000ff) ^
rk[0];
PUTU32(out , s0);
s1 =
(AES_Td4[(t1 >> 24) ] & 0xff000000) ^
(AES_Td4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t2 ) & 0xff] & 0x000000ff) ^
(Td4[(t1 >> 24) ] & 0xff000000) ^
(Td4[(t0 >> 16) & 0xff] & 0x00ff0000) ^
(Td4[(t3 >> 8) & 0xff] & 0x0000ff00) ^
(Td4[(t2 ) & 0xff] & 0x000000ff) ^
rk[1];
PUTU32(out + 4, s1);
s2 =
(AES_Td4[(t2 >> 24) ] & 0xff000000) ^
(AES_Td4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t3 ) & 0xff] & 0x000000ff) ^
(Td4[(t2 >> 24) ] & 0xff000000) ^
(Td4[(t1 >> 16) & 0xff] & 0x00ff0000) ^
(Td4[(t0 >> 8) & 0xff] & 0x0000ff00) ^
(Td4[(t3 ) & 0xff] & 0x000000ff) ^
rk[2];
PUTU32(out + 8, s2);
s3 =
(AES_Td4[(t3 >> 24) ] & 0xff000000) ^
(AES_Td4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
(AES_Td4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
(AES_Td4[(t0 ) & 0xff] & 0x000000ff) ^
(Td4[(t3 >> 24) ] & 0xff000000) ^
(Td4[(t2 >> 16) & 0xff] & 0x00ff0000) ^
(Td4[(t1 >> 8) & 0xff] & 0x0000ff00) ^
(Td4[(t0 ) & 0xff] & 0x000000ff) ^
rk[3];
PUTU32(out + 12, s3);
}
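The four PUTU32 calls above write the recovered plaintext block back out in big-endian byte order after the Td4 (inverse S-box) recombination. GETU32/PUTU32 are defined elsewhere in aes.c and are not part of this hunk; the following is only a hedged sketch of the usual OpenSSL-style definitions they are assumed to follow, with the types widened to stdint names.

/* Hedged sketch of the byte-order helpers assumed by the code above. */
#include <stdint.h>
#define GETU32(pt) (((uint32_t)(pt)[0] << 24) ^ ((uint32_t)(pt)[1] << 16) ^ \
                    ((uint32_t)(pt)[2] <<  8) ^  (uint32_t)(pt)[3])
#define PUTU32(ct, st) { (ct)[0] = (uint8_t)((st) >> 24); (ct)[1] = (uint8_t)((st) >> 16); \
                         (ct)[2] = (uint8_t)((st) >>  8); (ct)[3] = (uint8_t)(st); }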

View File

@@ -23,23 +23,4 @@ void AES_cbc_encrypt(const unsigned char *in, unsigned char *out,
const unsigned long length, const AES_KEY *key,
unsigned char *ivec, const int enc);
/*
AES_Te0[x] = S [x].[02, 01, 01, 03];
AES_Te1[x] = S [x].[03, 02, 01, 01];
AES_Te2[x] = S [x].[01, 03, 02, 01];
AES_Te3[x] = S [x].[01, 01, 03, 02];
AES_Te4[x] = S [x].[01, 01, 01, 01];
AES_Td0[x] = Si[x].[0e, 09, 0d, 0b];
AES_Td1[x] = Si[x].[0b, 0e, 09, 0d];
AES_Td2[x] = Si[x].[0d, 0b, 0e, 09];
AES_Td3[x] = Si[x].[09, 0d, 0b, 0e];
AES_Td4[x] = Si[x].[01, 01, 01, 01];
*/
extern const uint32_t AES_Te0[256], AES_Te1[256], AES_Te2[256],
AES_Te3[256], AES_Te4[256];
extern const uint32_t AES_Td0[256], AES_Td1[256], AES_Td2[256],
AES_Td3[256], AES_Td4[256];
#endif
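The block comment above describes each table entry as one (inverse) S-box output multiplied by a fixed InvMixColumns column and packed into a 32-bit word. As a hedged illustration of how a single AES_Td0 entry is composed, consider the sketch below; Si (the inverse S-box) and gf_mul (a GF(2^8) multiply) are placeholders and are not declared in this header.

/* Illustrative sketch only: how one AES_Td0 entry follows from the comment above. */
static uint32_t td0_entry(uint8_t x)
{
    uint8_t si = Si[x];                        /* inverse S-box lookup (placeholder table) */
    return ((uint32_t)gf_mul(si, 0x0e) << 24) |
           ((uint32_t)gf_mul(si, 0x09) << 16) |
           ((uint32_t)gf_mul(si, 0x0d) <<  8) |
            (uint32_t)gf_mul(si, 0x0b);        /* matches AES_Td0[x] = Si[x].[0e, 09, 0d, 0b] */
}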

View File

@@ -14,17 +14,17 @@
*/
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
#include "block.h"
#include "qemu-queue.h"
#include "qemu_socket.h"
struct AioHandler
{
GPollFD pfd;
IOHandler *io_read;
IOHandler *io_write;
AioFlushHandler *io_flush;
int deleted;
int pollfds_idx;
void *opaque;
QLIST_ENTRY(AioHandler) node;
};
@@ -46,6 +46,7 @@ void aio_set_fd_handler(AioContext *ctx,
int fd,
IOHandler *io_read,
IOHandler *io_write,
AioFlushHandler *io_flush,
void *opaque)
{
AioHandler *node;
@@ -82,11 +83,11 @@ void aio_set_fd_handler(AioContext *ctx,
/* Update handler with latest information */
node->io_read = io_read;
node->io_write = io_write;
node->io_flush = io_flush;
node->opaque = opaque;
node->pollfds_idx = -1;
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP | G_IO_ERR : 0);
node->pfd.events |= (io_write ? G_IO_OUT | G_IO_ERR : 0);
node->pfd.events = (io_read ? G_IO_IN | G_IO_HUP : 0);
node->pfd.events |= (io_write ? G_IO_OUT : 0);
}
aio_notify(ctx);
@@ -94,10 +95,12 @@ void aio_set_fd_handler(AioContext *ctx,
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *notifier,
EventNotifierHandler *io_read)
EventNotifierHandler *io_read,
AioFlushEventNotifierHandler *io_flush)
{
aio_set_fd_handler(ctx, event_notifier_get_fd(notifier),
(IOHandler *)io_read, NULL, notifier);
(IOHandler *)io_read, NULL,
(AioFlushHandler *)io_flush, notifier);
}
bool aio_pending(AioContext *ctx)
@@ -107,6 +110,13 @@ bool aio_pending(AioContext *ctx)
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
int revents;
/*
* FIXME: right now we cannot get G_IO_HUP and G_IO_ERR because
* main-loop.c is still select based (due to the slirp legacy).
* If main-loop.c ever switches to poll, G_IO_ERR should be
* tested too. Dispatching G_IO_ERR to both handlers should be
* okay, since handlers need to be ready for spurious wakeups.
*/
revents = node->pfd.revents & node->pfd.events;
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
return true;
@@ -119,12 +129,30 @@ bool aio_pending(AioContext *ctx)
return false;
}
static bool aio_dispatch(AioContext *ctx)
bool aio_poll(AioContext *ctx, bool blocking)
{
static struct timeval tv0;
AioHandler *node;
bool progress = false;
fd_set rdfds, wrfds;
int max_fd = -1;
int ret;
bool busy, progress;
progress = false;
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for qemu_aio_wait loops).
*/
if (aio_bh_poll(ctx)) {
blocking = false;
progress = true;
}
/*
* Then dispatch any pending callbacks from the GSource.
*
* We have to walk very carefully in case qemu_aio_set_fd_handler is
* called while we're walking.
*/
@@ -138,19 +166,12 @@ static bool aio_dispatch(AioContext *ctx)
revents = node->pfd.revents & node->pfd.events;
node->pfd.revents = 0;
if (!node->deleted &&
(revents & (G_IO_IN | G_IO_HUP | G_IO_ERR)) &&
node->io_read) {
/* See comment in aio_pending. */
if (revents & (G_IO_IN | G_IO_HUP | G_IO_ERR) && node->io_read) {
node->io_read(node->opaque);
/* aio_notify() does not count as progress */
if (node->opaque != &ctx->notifier) {
progress = true;
}
progress = true;
}
if (!node->deleted &&
(revents & (G_IO_OUT | G_IO_ERR)) &&
node->io_write) {
if (revents & (G_IO_OUT | G_IO_ERR) && node->io_write) {
node->io_write(node->opaque);
progress = true;
}
@@ -166,77 +187,83 @@ static bool aio_dispatch(AioContext *ctx)
}
}
/* Run our timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
return progress;
}
bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
int ret;
bool progress;
progress = false;
/*
* If there are callbacks left that have been queued, we need to call them.
* Do not call select in this case, because it is possible that the caller
* does not need a complete flush (as is the case for qemu_aio_wait loops).
*/
if (aio_bh_poll(ctx)) {
blocking = false;
progress = true;
}
if (aio_dispatch(ctx)) {
progress = true;
}
if (progress && !blocking) {
return true;
}
ctx->walking_handlers++;
g_array_set_size(ctx->pollfds, 0);
FD_ZERO(&rdfds);
FD_ZERO(&wrfds);
/* fill pollfds */
/* fill fd sets */
busy = false;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
node->pollfds_idx = -1;
if (!node->deleted && node->pfd.events) {
GPollFD pfd = {
.fd = node->pfd.fd,
.events = node->pfd.events,
};
node->pollfds_idx = ctx->pollfds->len;
g_array_append_val(ctx->pollfds, pfd);
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (!node->deleted && node->io_flush) {
if (node->io_flush(node->opaque) == 0) {
continue;
}
busy = true;
}
if (!node->deleted && node->io_read) {
FD_SET(node->pfd.fd, &rdfds);
max_fd = MAX(max_fd, node->pfd.fd + 1);
}
if (!node->deleted && node->io_write) {
FD_SET(node->pfd.fd, &wrfds);
max_fd = MAX(max_fd, node->pfd.fd + 1);
}
}
ctx->walking_handlers--;
/* No AIO operations? Get us out of here */
if (!busy) {
return progress;
}
/* wait until next event */
ret = qemu_poll_ns((GPollFD *)ctx->pollfds->data,
ctx->pollfds->len,
blocking ? timerlistgroup_deadline_ns(&ctx->tlg) : 0);
ret = select(max_fd, &rdfds, &wrfds, NULL, blocking ? NULL : &tv0);
/* if we have any readable fds, dispatch event */
if (ret > 0) {
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
if (node->pollfds_idx != -1) {
GPollFD *pfd = &g_array_index(ctx->pollfds, GPollFD,
node->pollfds_idx);
node->pfd.revents = pfd->revents;
/* we have to walk very carefully in case
* qemu_aio_set_fd_handler is called while we're walking */
node = QLIST_FIRST(&ctx->aio_handlers);
while (node) {
AioHandler *tmp;
ctx->walking_handlers++;
if (!node->deleted &&
FD_ISSET(node->pfd.fd, &rdfds) &&
node->io_read) {
node->io_read(node->opaque);
progress = true;
}
if (!node->deleted &&
FD_ISSET(node->pfd.fd, &wrfds) &&
node->io_write) {
node->io_write(node->opaque);
progress = true;
}
tmp = node;
node = QLIST_NEXT(node, node);
ctx->walking_handlers--;
if (!ctx->walking_handlers && tmp->deleted) {
QLIST_REMOVE(tmp, node);
g_free(tmp);
}
}
}
/* Run dispatch even if there were no readable fds to run timers */
if (aio_dispatch(ctx)) {
progress = true;
}
return progress;
assert(progress || busy);
return true;
}
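The select()-based variant of aio_poll in this hunk only blocks while some registered io_flush callback reports outstanding requests ("busy"); a handler whose io_flush returns 0 is skipped for that iteration. A minimal sketch of a client registration against the io_flush-taking aio_set_fd_handler shown earlier in this file follows; MyState and the callbacks are invented for illustration.

/* Hedged sketch: fd registration against the io_flush-taking aio_set_fd_handler above. */
typedef struct MyState { int fd; int inflight; } MyState;

static int my_flush(void *opaque)        /* non-zero while requests are still in flight */
{
    MyState *s = opaque;
    return s->inflight > 0;
}

static void my_read(void *opaque)        /* called when the fd becomes readable */
{
    MyState *s = opaque;
    s->inflight--;                       /* e.g. one completion consumed */
}

static void my_attach(AioContext *ctx, MyState *s)
{
    aio_set_fd_handler(ctx, s->fd, my_read, NULL, my_flush, s);
}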

View File

@@ -16,13 +16,14 @@
*/
#include "qemu-common.h"
#include "block/block.h"
#include "qemu/queue.h"
#include "qemu/sockets.h"
#include "block.h"
#include "qemu-queue.h"
#include "qemu_socket.h"
struct AioHandler {
EventNotifier *e;
EventNotifierHandler *io_notify;
AioFlushEventNotifierHandler *io_flush;
GPollFD pfd;
int deleted;
QLIST_ENTRY(AioHandler) node;
@@ -30,7 +31,8 @@ struct AioHandler {
void aio_set_event_notifier(AioContext *ctx,
EventNotifier *e,
EventNotifierHandler *io_notify)
EventNotifierHandler *io_notify,
AioFlushEventNotifierHandler *io_flush)
{
AioHandler *node;
@@ -71,6 +73,7 @@ void aio_set_event_notifier(AioContext *ctx,
}
/* Update handler with latest information */
node->io_notify = io_notify;
node->io_flush = io_flush;
}
aio_notify(ctx);
@@ -93,9 +96,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
{
AioHandler *node;
HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
bool progress;
bool busy, progress;
int count;
int timeout;
progress = false;
@@ -109,9 +111,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
progress = true;
}
/* Run timers */
progress |= timerlistgroup_run_timers(&ctx->tlg);
/*
* Then dispatch any pending callbacks from the GSource.
*
@@ -127,11 +126,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
if (node->pfd.revents && node->io_notify) {
node->pfd.revents = 0;
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
progress = true;
}
tmp = node;
@@ -152,8 +147,19 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers++;
/* fill fd sets */
busy = false;
count = 0;
QLIST_FOREACH(node, &ctx->aio_handlers, node) {
/* If there aren't pending AIO operations, don't invoke callbacks.
* Otherwise, if there are no AIO requests, qemu_aio_wait() would
* wait indefinitely.
*/
if (!node->deleted && node->io_flush) {
if (node->io_flush(node->e) == 0) {
continue;
}
busy = true;
}
if (!node->deleted && node->io_notify) {
events[count++] = event_notifier_get_handle(node->e);
}
@@ -161,13 +167,15 @@ bool aio_poll(AioContext *ctx, bool blocking)
ctx->walking_handlers--;
/* No AIO operations? Get us out of here */
if (!busy) {
return progress;
}
/* wait until next event */
while (count > 0) {
int ret;
timeout = blocking ?
qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg)) : 0;
ret = WaitForMultipleObjects(count, events, FALSE, timeout);
int timeout = blocking ? INFINITE : 0;
int ret = WaitForMultipleObjects(count, events, FALSE, timeout);
/* if we have any signaled events, dispatch event */
if ((DWORD) (ret - WAIT_OBJECT_0) >= count) {
@@ -188,11 +196,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
event_notifier_get_handle(node->e) == events[ret - WAIT_OBJECT_0] &&
node->io_notify) {
node->io_notify(node->e);
/* aio_notify() does not count as progress */
if (node->e != &ctx->notifier) {
progress = true;
}
progress = true;
}
tmp = node;
@@ -210,14 +214,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
events[ret - WAIT_OBJECT_0] = events[--count];
}
if (blocking) {
/* Run the timers a second time. We do this because otherwise aio_wait
* will not note progress - and will stop a drain early - if we have
* a timer that was not ready to run entering g_poll but is ready
* after g_poll. This will only do anything if a timer has expired.
*/
progress |= timerlistgroup_run_timers(&ctx->tlg);
}
return progress;
assert(progress || busy);
return true;
}

View File

@@ -20,7 +20,7 @@ along with this file; see the file COPYING. If not, see
<http://www.gnu.org/licenses/>. */
#include <stdio.h>
#include "disas/bfd.h"
#include "dis-asm.h"
/* MAX is redefined below, so remove any previous definition. */
#undef MAX

127
alpha.ld Normal file
View File

@@ -0,0 +1,127 @@
OUTPUT_FORMAT("elf64-alpha", "elf64-alpha",
"elf64-alpha")
OUTPUT_ARCH(alpha)
ENTRY(__start)
SECTIONS
{
/* Read-only sections, merged into text segment: */
. = 0x60000000 + SIZEOF_HEADERS;
.interp : { *(.interp) }
.hash : { *(.hash) }
.dynsym : { *(.dynsym) }
.dynstr : { *(.dynstr) }
.gnu.version : { *(.gnu.version) }
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
.rel.text :
{ *(.rel.text) *(.rel.gnu.linkonce.t*) }
.rela.text :
{ *(.rela.text) *(.rela.gnu.linkonce.t*) }
.rel.data :
{ *(.rel.data) *(.rel.gnu.linkonce.d*) }
.rela.data :
{ *(.rela.data) *(.rela.gnu.linkonce.d*) }
.rel.rodata :
{ *(.rel.rodata) *(.rel.gnu.linkonce.r*) }
.rela.rodata :
{ *(.rela.rodata) *(.rela.gnu.linkonce.r*) }
.rel.got : { *(.rel.got) }
.rela.got : { *(.rela.got) }
.rel.ctors : { *(.rel.ctors) }
.rela.ctors : { *(.rela.ctors) }
.rel.dtors : { *(.rel.dtors) }
.rela.dtors : { *(.rela.dtors) }
.rel.init : { *(.rel.init) }
.rela.init : { *(.rela.init) }
.rel.fini : { *(.rel.fini) }
.rela.fini : { *(.rela.fini) }
.rel.bss : { *(.rel.bss) }
.rela.bss : { *(.rela.bss) }
.rel.plt : { *(.rel.plt) }
.rela.plt : { *(.rela.plt) }
.init : { *(.init) } =0x47ff041f
.text :
{
*(.text)
/* .gnu.warning sections are handled specially by elf32.em. */
*(.gnu.warning)
*(.gnu.linkonce.t*)
} =0x47ff041f
_etext = .;
PROVIDE (etext = .);
.fini : { *(.fini) } =0x47ff041f
.rodata : { *(.rodata) *(.gnu.linkonce.r*) }
.rodata1 : { *(.rodata1) }
.reginfo : { *(.reginfo) }
/* Adjust the address for the data segment. We want to adjust up to
the same address within the page on the next page up. */
. = ALIGN(0x100000) + (. & (0x100000 - 1));
.data :
{
*(.data)
*(.gnu.linkonce.d*)
CONSTRUCTORS
}
.data1 : { *(.data1) }
.ctors :
{
*(.ctors)
}
.dtors :
{
*(.dtors)
}
.plt : { *(.plt) }
.got : { *(.got.plt) *(.got) }
.dynamic : { *(.dynamic) }
/* We want the small data sections together, so single-instruction offsets
can access them all, and initialized data all before uninitialized, so
we can shorten the on-disk segment size. */
.sdata : { *(.sdata) }
_edata = .;
PROVIDE (edata = .);
__bss_start = .;
.sbss : { *(.sbss) *(.scommon) }
.bss :
{
*(.dynbss)
*(.bss)
*(COMMON)
}
_end = . ;
PROVIDE (end = .);
/* Stabs debugging sections. */
.stab 0 : { *(.stab) }
.stabstr 0 : { *(.stabstr) }
.stab.excl 0 : { *(.stab.excl) }
.stab.exclstr 0 : { *(.stab.exclstr) }
.stab.index 0 : { *(.stab.index) }
.stab.indexstr 0 : { *(.stab.indexstr) }
.comment 0 : { *(.comment) }
/* DWARF debug sections.
Symbols in the DWARF debugging sections are relative to the beginning
of the section so we begin them at 0. */
/* DWARF 1 */
.debug 0 : { *(.debug) }
.line 0 : { *(.line) }
/* GNU DWARF 1 extensions */
.debug_srcinfo 0 : { *(.debug_srcinfo) }
.debug_sfnames 0 : { *(.debug_sfnames) }
/* DWARF 1.1 and DWARF 2 */
.debug_aranges 0 : { *(.debug_aranges) }
.debug_pubnames 0 : { *(.debug_pubnames) }
/* DWARF 2 */
.debug_info 0 : { *(.debug_info) }
.debug_abbrev 0 : { *(.debug_abbrev) }
.debug_line 0 : { *(.debug_line) }
.debug_frame 0 : { *(.debug_frame) }
.debug_str 0 : { *(.debug_str) }
.debug_loc 0 : { *(.debug_loc) }
.debug_macinfo 0 : { *(.debug_macinfo) }
/* SGI/MIPS DWARF 2 extensions */
.debug_weaknames 0 : { *(.debug_weaknames) }
.debug_funcnames 0 : { *(.debug_funcnames) }
.debug_typenames 0 : { *(.debug_typenames) }
.debug_varnames 0 : { *(.debug_varnames) }
/* These must appear regardless of . */
}

File diff suppressed because it is too large

View File

@@ -2,7 +2,6 @@
#define QEMU_ARCH_INIT_H
#include "qmp-commands.h"
#include "qemu/option.h"
enum {
QEMU_ARCH_ALL = -1,
@@ -21,17 +20,16 @@ enum {
QEMU_ARCH_XTENSA = 4096,
QEMU_ARCH_OPENRISC = 8192,
QEMU_ARCH_UNICORE32 = 0x4000,
QEMU_ARCH_MOXIE = 0x8000,
};
extern const uint32_t arch_type;
void select_soundhw(const char *optarg);
void do_acpitable_option(const QemuOpts *opts);
void do_smbios_option(QemuOpts *opts);
void ram_mig_init(void);
void do_acpitable_option(const char *optarg);
void do_smbios_option(const char *optarg);
void cpudef_init(void);
void audio_init(void);
int audio_available(void);
void audio_init(ISABus *isa_bus, PCIBus *pci_bus);
int tcg_available(void);
int kvm_available(void);
int xen_available(void);

View File

@@ -22,7 +22,7 @@
/* Start of qemu specific additions. Mostly this is stub definitions
for things we don't care about. */
#include "disas/bfd.h"
#include "dis-asm.h"
#define ATTRIBUTE_UNUSED __attribute__((unused))
#define ISSPACE(x) ((x) == ' ' || (x) == '\t' || (x) == '\n')
@@ -819,10 +819,6 @@ static const struct opcode32 arm_opcodes[] =
{ARM_EXT_V3M, 0x00800090, 0x0fa000f0, "%22?sumull%20's%c\t%12-15r, %16-19r, %0-3r, %8-11r"},
{ARM_EXT_V3M, 0x00a00090, 0x0fa000f0, "%22?sumlal%20's%c\t%12-15r, %16-19r, %0-3r, %8-11r"},
/* IDIV instructions. */
{ARM_EXT_DIV, 0x0710f010, 0x0ff0f0f0, "sdiv%c\t%16-19r, %0-3r, %8-11r"},
{ARM_EXT_DIV, 0x0730f010, 0x0ff0f0f0, "udiv%c\t%16-19r, %0-3r, %8-11r"},
/* V7 instructions. */
{ARM_EXT_V7, 0xf450f000, 0xfd70f000, "pli\t%P"},
{ARM_EXT_V7, 0x0320f0f0, 0x0ffffff0, "dbg%c\t#%0-3d"},

153
arm.ld Normal file
View File

@@ -0,0 +1,153 @@
OUTPUT_FORMAT("elf32-littlearm", "elf32-littlearm",
"elf32-littlearm")
OUTPUT_ARCH(arm)
ENTRY(_start)
SECTIONS
{
/* Read-only sections, merged into text segment: */
. = 0x60000000 + SIZEOF_HEADERS;
.interp : { *(.interp) }
.hash : { *(.hash) }
.dynsym : { *(.dynsym) }
.dynstr : { *(.dynstr) }
.gnu.version : { *(.gnu.version) }
.gnu.version_d : { *(.gnu.version_d) }
.gnu.version_r : { *(.gnu.version_r) }
.rel.text :
{ *(.rel.text) *(.rel.gnu.linkonce.t*) }
.rela.text :
{ *(.rela.text) *(.rela.gnu.linkonce.t*) }
.rel.data :
{ *(.rel.data) *(.rel.gnu.linkonce.d*) }
.rela.data :
{ *(.rela.data) *(.rela.gnu.linkonce.d*) }
.rel.rodata :
{ *(.rel.rodata) *(.rel.gnu.linkonce.r*) }
.rela.rodata :
{ *(.rela.rodata) *(.rela.gnu.linkonce.r*) }
.rel.got : { *(.rel.got) }
.rela.got : { *(.rela.got) }
.rel.ctors : { *(.rel.ctors) }
.rela.ctors : { *(.rela.ctors) }
.rel.dtors : { *(.rel.dtors) }
.rela.dtors : { *(.rela.dtors) }
.rel.init : { *(.rel.init) }
.rela.init : { *(.rela.init) }
.rel.fini : { *(.rel.fini) }
.rela.fini : { *(.rela.fini) }
.rel.bss : { *(.rel.bss) }
.rela.bss : { *(.rela.bss) }
.rel.plt : { *(.rel.plt) }
.rela.plt : { *(.rela.plt) }
.init : { *(.init) } =0x47ff041f
.text :
{
*(.text)
/* .gnu.warning sections are handled specially by elf32.em. */
*(.gnu.warning)
*(.gnu.linkonce.t*)
} =0x47ff041f
_etext = .;
PROVIDE (etext = .);
.fini : { *(.fini) } =0x47ff041f
.rodata : { *(.rodata) *(.gnu.linkonce.r*) }
.rodata1 : { *(.rodata1) }
.ARM.extab : { *(.ARM.extab* .gnu.linkonce.armextab.*) }
__exidx_start = .;
.ARM.exidx : { *(.ARM.exidx* .gnu.linkonce.armexidx.*) }
__exidx_end = .;
.reginfo : { *(.reginfo) }
/* Adjust the address for the data segment. We want to adjust up to
the same address within the page on the next page up. */
. = ALIGN(0x100000) + (. & (0x100000 - 1));
.data :
{
*(.gen_code)
*(.data)
*(.gnu.linkonce.d*)
CONSTRUCTORS
}
.tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) }
.data1 : { *(.data1) }
.preinit_array :
{
PROVIDE (__preinit_array_start = .);
KEEP (*(.preinit_array))
PROVIDE (__preinit_array_end = .);
}
.init_array :
{
PROVIDE (__init_array_start = .);
KEEP (*(SORT(.init_array.*)))
KEEP (*(.init_array))
PROVIDE (__init_array_end = .);
}
.fini_array :
{
PROVIDE (__fini_array_start = .);
KEEP (*(.fini_array))
KEEP (*(SORT(.fini_array.*)))
PROVIDE (__fini_array_end = .);
}
.ctors :
{
*(.ctors)
}
.dtors :
{
*(.dtors)
}
.plt : { *(.plt) }
.got : { *(.got.plt) *(.got) }
.dynamic : { *(.dynamic) }
/* We want the small data sections together, so single-instruction offsets
can access them all, and initialized data all before uninitialized, so
we can shorten the on-disk segment size. */
.sdata : { *(.sdata) }
_edata = .;
PROVIDE (edata = .);
__bss_start = .;
.sbss : { *(.sbss) *(.scommon) }
.bss :
{
*(.dynbss)
*(.bss)
*(COMMON)
}
_end = . ;
PROVIDE (end = .);
/* Stabs debugging sections. */
.stab 0 : { *(.stab) }
.stabstr 0 : { *(.stabstr) }
.stab.excl 0 : { *(.stab.excl) }
.stab.exclstr 0 : { *(.stab.exclstr) }
.stab.index 0 : { *(.stab.index) }
.stab.indexstr 0 : { *(.stab.indexstr) }
.comment 0 : { *(.comment) }
/* DWARF debug sections.
Symbols in the DWARF debugging sections are relative to the beginning
of the section so we begin them at 0. */
/* DWARF 1 */
.debug 0 : { *(.debug) }
.line 0 : { *(.line) }
/* GNU DWARF 1 extensions */
.debug_srcinfo 0 : { *(.debug_srcinfo) }
.debug_sfnames 0 : { *(.debug_sfnames) }
/* DWARF 1.1 and DWARF 2 */
.debug_aranges 0 : { *(.debug_aranges) }
.debug_pubnames 0 : { *(.debug_pubnames) }
/* DWARF 2 */
.debug_info 0 : { *(.debug_info) }
.debug_abbrev 0 : { *(.debug_abbrev) }
.debug_line 0 : { *(.debug_line) }
.debug_frame 0 : { *(.debug_frame) }
.debug_str 0 : { *(.debug_str) }
.debug_loc 0 : { *(.debug_loc) }
.debug_macinfo 0 : { *(.debug_macinfo) }
/* SGI/MIPS DWARF 2 extensions */
.debug_weaknames 0 : { *(.debug_weaknames) }
.debug_funcnames 0 : { *(.debug_funcnames) }
.debug_typenames 0 : { *(.debug_typenames) }
.debug_varnames 0 : { *(.debug_varnames) }
/* These must appear regardless of . */
}

91
async.c
View File

@@ -23,9 +23,8 @@
*/
#include "qemu-common.h"
#include "block/aio.h"
#include "block/thread-pool.h"
#include "qemu/main-loop.h"
#include "qemu-aio.h"
#include "main-loop.h"
/***********************************************************/
/* bottom halves (can be seen as timers which expire ASAP) */
@@ -47,16 +46,11 @@ QEMUBH *aio_bh_new(AioContext *ctx, QEMUBHFunc *cb, void *opaque)
bh->ctx = ctx;
bh->cb = cb;
bh->opaque = opaque;
qemu_mutex_lock(&ctx->bh_lock);
bh->next = ctx->first_bh;
/* Make sure that the members are ready before putting bh into list */
smp_wmb();
ctx->first_bh = bh;
qemu_mutex_unlock(&ctx->bh_lock);
return bh;
}
/* Multiple occurrences of aio_bh_poll cannot be called concurrently */
int aio_bh_poll(AioContext *ctx)
{
QEMUBH *bh, **bhp, *next;
@@ -66,15 +60,9 @@ int aio_bh_poll(AioContext *ctx)
ret = 0;
for (bh = ctx->first_bh; bh; bh = next) {
/* Make sure that fetching bh happens before accessing its members */
smp_read_barrier_depends();
next = bh->next;
if (!bh->deleted && bh->scheduled) {
bh->scheduled = 0;
/* Paired with write barrier in bh schedule to ensure reading for
* idle & callbacks coming after bh's scheduling.
*/
smp_rmb();
if (!bh->idle)
ret = 1;
bh->idle = 0;
@@ -86,7 +74,6 @@ int aio_bh_poll(AioContext *ctx)
/* remove deleted bhs */
if (!ctx->walking_bh) {
qemu_mutex_lock(&ctx->bh_lock);
bhp = &ctx->first_bh;
while (*bhp) {
bh = *bhp;
@@ -97,7 +84,6 @@ int aio_bh_poll(AioContext *ctx)
bhp = &bh->next;
}
}
qemu_mutex_unlock(&ctx->bh_lock);
}
return ret;
@@ -107,38 +93,24 @@ void qemu_bh_schedule_idle(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 1;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
smp_wmb();
bh->scheduled = 1;
bh->idle = 1;
}
void qemu_bh_schedule(QEMUBH *bh)
{
if (bh->scheduled)
return;
bh->idle = 0;
/* Make sure that idle & any writes needed by the callback are done
* before the locations are read in the aio_bh_poll.
*/
smp_wmb();
bh->scheduled = 1;
bh->idle = 0;
aio_notify(bh->ctx);
}
/* This func is async.
*/
void qemu_bh_cancel(QEMUBH *bh)
{
bh->scheduled = 0;
}
/* This func is async. The bottom half will do the delete action at the final
* end.
*/
void qemu_bh_delete(QEMUBH *bh)
{
bh->scheduled = 0;
@@ -150,10 +122,7 @@ aio_ctx_prepare(GSource *source, gint *timeout)
{
AioContext *ctx = (AioContext *) source;
QEMUBH *bh;
int deadline;
/* We assume there is no timeout already supplied */
*timeout = -1;
for (bh = ctx->first_bh; bh; bh = bh->next) {
if (!bh->deleted && bh->scheduled) {
if (bh->idle) {
@@ -169,14 +138,6 @@ aio_ctx_prepare(GSource *source, gint *timeout)
}
}
deadline = qemu_timeout_ns_to_ms(timerlistgroup_deadline_ns(&ctx->tlg));
if (deadline == 0) {
*timeout = 0;
return true;
} else {
*timeout = qemu_soonest_timeout(*timeout, deadline);
}
return false;
}
@@ -191,7 +152,7 @@ aio_ctx_check(GSource *source)
return true;
}
}
return aio_pending(ctx) || (timerlistgroup_deadline_ns(&ctx->tlg) == 0);
return aio_pending(ctx);
}
static gboolean
@@ -211,13 +172,8 @@ aio_ctx_finalize(GSource *source)
{
AioContext *ctx = (AioContext *) source;
thread_pool_free(ctx->thread_pool);
aio_set_event_notifier(ctx, &ctx->notifier, NULL);
aio_set_event_notifier(ctx, &ctx->notifier, NULL, NULL);
event_notifier_cleanup(&ctx->notifier);
rfifolock_destroy(&ctx->lock);
qemu_mutex_destroy(&ctx->bh_lock);
g_array_free(ctx->pollfds, TRUE);
timerlistgroup_deinit(&ctx->tlg);
}
static GSourceFuncs aio_source_funcs = {
@@ -233,43 +189,19 @@ GSource *aio_get_g_source(AioContext *ctx)
return &ctx->source;
}
ThreadPool *aio_get_thread_pool(AioContext *ctx)
{
if (!ctx->thread_pool) {
ctx->thread_pool = thread_pool_new(ctx);
}
return ctx->thread_pool;
}
void aio_notify(AioContext *ctx)
{
event_notifier_set(&ctx->notifier);
}
static void aio_timerlist_notify(void *opaque)
{
aio_notify(opaque);
}
static void aio_rfifolock_cb(void *opaque)
{
/* Kick owner thread in case they are blocked in aio_poll() */
aio_notify(opaque);
}
AioContext *aio_context_new(void)
{
AioContext *ctx;
ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
ctx->thread_pool = NULL;
qemu_mutex_init(&ctx->bh_lock);
rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
event_notifier_init(&ctx->notifier, false);
aio_set_event_notifier(ctx, &ctx->notifier,
(EventNotifierHandler *)
event_notifier_test_and_clear);
timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
event_notifier_test_and_clear, NULL);
return ctx;
}
@@ -284,12 +216,7 @@ void aio_context_unref(AioContext *ctx)
g_source_unref(&ctx->source);
}
void aio_context_acquire(AioContext *ctx)
void aio_flush(AioContext *ctx)
{
rfifolock_lock(&ctx->lock);
}
void aio_context_release(AioContext *ctx)
{
rfifolock_unlock(&ctx->lock);
while (aio_poll(ctx, true));
}
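For reference, the bottom-half API touched above is used roughly as follows; the callback name and surrounding function are invented, but the calls match the functions shown in this file.

/* Hedged usage sketch of the bottom-half API shown above. */
static void my_bh_cb(void *opaque)
{
    /* runs from aio_bh_poll() in the context's thread */
}

static void example(AioContext *ctx, void *state)
{
    QEMUBH *bh = aio_bh_new(ctx, my_bh_cb, state);
    qemu_bh_schedule(bh);   /* marks it scheduled and wakes the context via aio_notify() */
    /* ... once the callback is no longer needed: */
    qemu_bh_delete(bh);     /* deletion is deferred; aio_bh_poll() unlinks and frees it */
}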

View File

@@ -12,6 +12,3 @@ common-obj-$(CONFIG_WINWAVE) += winwaveaudio.o
common-obj-$(CONFIG_AUDIO_PT_INT) += audio_pt_int.o
common-obj-$(CONFIG_AUDIO_WIN_INT) += audio_win_int.o
common-obj-y += wavcapture.o
$(obj)/audio.o $(obj)/fmodaudio.o: QEMU_CFLAGS += $(FMOD_CFLAGS)
sdlaudio.o-cflags := $(SDL_CFLAGS)

View File

@@ -23,7 +23,7 @@
*/
#include <alsa/asoundlib.h>
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "qemu-char.h"
#include "audio.h"
#if QEMU_GNUC_PREREQ(4, 3)

View File

@@ -23,9 +23,9 @@
*/
#include "hw/hw.h"
#include "audio.h"
#include "monitor/monitor.h"
#include "qemu/timer.h"
#include "sysemu/sysemu.h"
#include "monitor.h"
#include "qemu-timer.h"
#include "sysemu.h"
#define AUDIO_CAP "audio"
#include "audio_int.h"
@@ -95,7 +95,7 @@ static struct {
}
},
.period = { .hertz = 100 },
.period = { .hertz = 250 },
.plive = 0,
.log_to_monitor = 0,
.try_poll_in = 1,
@@ -828,9 +828,8 @@ static int audio_attach_capture (HWVoiceOut *hw)
QLIST_INSERT_HEAD (&hw_cap->sw_head, sw, entries);
QLIST_INSERT_HEAD (&hw->cap_head, sc, entries);
#ifdef DEBUG_CAPTURE
sw->name = g_strdup_printf ("for %p %d,%d,%d",
hw, sw->info.freq, sw->info.bits,
sw->info.nchannels);
asprintf (&sw->name, "for %p %d,%d,%d",
hw, sw->info.freq, sw->info.bits, sw->info.nchannels);
dolog ("Added %s active = %d\n", sw->name, sw->active);
#endif
if (sw->active) {
@@ -1124,11 +1123,10 @@ static int audio_is_timer_needed (void)
static void audio_reset_timer (AudioState *s)
{
if (audio_is_timer_needed ()) {
timer_mod (s->ts,
qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + conf.period.ticks);
qemu_mod_timer (s->ts, qemu_get_clock_ns (vm_clock) + 1);
}
else {
timer_del (s->ts);
qemu_del_timer (s->ts);
}
}
@@ -1812,7 +1810,8 @@ static const VMStateDescription vmstate_audio = {
.name = "audio",
.version_id = 1,
.minimum_version_id = 1,
.fields = (VMStateField[]) {
.minimum_version_id_old = 1,
.fields = (VMStateField []) {
VMSTATE_END_OF_LIST()
}
};
@@ -1834,7 +1833,7 @@ static void audio_init (void)
QLIST_INIT (&s->cap_head);
atexit (audio_atexit);
s->ts = timer_new_ns(QEMU_CLOCK_VIRTUAL, audio_timer, s);
s->ts = qemu_new_timer_ns (vm_clock, audio_timer, s);
if (!s->ts) {
hw_error("Could not create audio timer\n");
}
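One side of the hunks above arms the audio timer conf.period.ticks nanoseconds ahead at a 100 Hz default; the other uses a fixed +1 ns at a nominal 250 Hz rate. The conversion from the hertz option to ticks is not part of this diff; the sketch below is only the arithmetic it implies, under that assumption.

/* Hypothetical sketch: nanoseconds between audio timer runs for a given rate. */
static int64_t audio_period_ticks(int hertz)
{
    if (hertz <= 0) {
        hertz = 100;                         /* default from the hunk above */
    }
    return get_ticks_per_sec() / hertz;      /* 1e9 / 100 Hz = 10 ms, 1e9 / 250 Hz = 4 ms */
}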

View File

@@ -25,7 +25,7 @@
#define QEMU_AUDIO_H
#include "config-host.h"
#include "qemu/queue.h"
#include "qemu-queue.h"
typedef void (*audio_callback_fn) (void *opaque, int avail);

View File

@@ -243,13 +243,38 @@ static inline int audio_ring_dist (int dst, int src, int len)
return (dst >= src) ? (dst - src) : (len - src + dst);
}
#define dolog(fmt, ...) AUD_log(AUDIO_CAP, fmt, ## __VA_ARGS__)
static void GCC_ATTR dolog (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
}
#ifdef DEBUG
#define ldebug(fmt, ...) AUD_log(AUDIO_CAP, fmt, ## __VA_ARGS__)
static void GCC_ATTR ldebug (const char *fmt, ...)
{
va_list ap;
va_start (ap, fmt);
AUD_vlog (AUDIO_CAP, fmt, ap);
va_end (ap);
}
#else
#define ldebug(fmt, ...) (void)0
#if defined NDEBUG && defined __GNUC__
#define ldebug(...)
#elif defined NDEBUG && defined _MSC_VER
#define ldebug __noop
#else
static void GCC_ATTR ldebug (const char *fmt, ...)
{
(void) fmt;
}
#endif
#endif
#undef GCC_ATTR
#define AUDIO_STRINGIFY_(n) #n
#define AUDIO_STRINGIFY(n) AUDIO_STRINGIFY_(n)
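On the macro side of this hunk, dolog and ldebug forward straight to AUD_log with the per-file AUDIO_CAP string substituted at the call site; from a driver's point of view the usage looks the same as with the GCC_ATTR wrapper functions on the other side. A hedged usage sketch, with an invented capability string:

/* Hedged sketch: how a driver uses the logging helpers from audio_int.h. */
#define AUDIO_CAP "demo"        /* hypothetical capability tag; real drivers define their own */
#include "audio_int.h"

static void report_rate(int freq)
{
    dolog("frequency %d\n", freq);    /* becomes AUD_log("demo", "frequency %d\n", freq) */
    ldebug("debug-only: %d\n", freq); /* compiled out unless DEBUG is defined */
}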

View File

@@ -1,6 +1,7 @@
/* public domain */
#include "qemu-common.h"
#include "audio.h"
#define AUDIO_CAP "win-int"
#include <windows.h>

View File

@@ -348,6 +348,7 @@ void mixeng_clear (struct st_sample *buf, int len)
void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol)
{
#ifdef CONFIG_MIXEMU
if (vol->mute) {
mixeng_clear (buf, len);
return;
@@ -363,4 +364,9 @@ void mixeng_volume (struct st_sample *buf, int len, struct mixeng_volume *vol)
#endif
buf += 1;
}
#else
(void) buf;
(void) len;
(void) vol;
#endif
}

View File

@@ -35,7 +35,7 @@
#define IN_T glue (glue (ITYPE, BSIZE), _t)
#ifdef FLOAT_MIXENG
static inline mixeng_real glue (conv_, ET) (IN_T v)
static mixeng_real inline glue (conv_, ET) (IN_T v)
{
IN_T nv = ENDIAN_CONVERT (v);
@@ -54,7 +54,7 @@ static inline mixeng_real glue (conv_, ET) (IN_T v)
#endif
}
static inline IN_T glue (clip_, ET) (mixeng_real v)
static IN_T inline glue (clip_, ET) (mixeng_real v)
{
if (v >= 0.5) {
return IN_MAX;

View File

@@ -23,7 +23,7 @@
*/
#include "qemu-common.h"
#include "audio.h"
#include "qemu/timer.h"
#include "qemu-timer.h"
#define AUDIO_CAP "noaudio"
#include "audio_int.h"
@@ -46,7 +46,7 @@ static int no_run_out (HWVoiceOut *hw, int live)
int64_t ticks;
int64_t bytes;
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
now = qemu_get_clock_ns (vm_clock);
ticks = now - no->old_ticks;
bytes = muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());
bytes = audio_MIN (bytes, INT_MAX);
@@ -102,7 +102,7 @@ static int no_run_in (HWVoiceIn *hw)
int samples = 0;
if (dead) {
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t now = qemu_get_clock_ns (vm_clock);
int64_t ticks = now - no->old_ticks;
int64_t bytes =
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());

View File

@@ -25,10 +25,14 @@
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/ioctl.h>
#ifdef __OpenBSD__
#include <soundcard.h>
#else
#include <sys/soundcard.h>
#endif
#include "qemu-common.h"
#include "qemu/main-loop.h"
#include "qemu/host-utils.h"
#include "host-utils.h"
#include "qemu-char.h"
#include "audio.h"
#define AUDIO_CAP "oss"
@@ -849,10 +853,6 @@ static int oss_ctl_in (HWVoiceIn *hw, int cmd, ...)
static void *oss_audio_init (void)
{
if (access(conf.devpath_in, R_OK | W_OK) < 0 ||
access(conf.devpath_out, R_OK | W_OK) < 0) {
return NULL;
}
return &conf;
}

View File

@@ -547,11 +547,11 @@ static int qpa_init_out (HWVoiceOut *hw, struct audsettings *as)
ss.rate = as->freq;
/*
* qemu audio tick runs at 100 Hz (by default), so processing
* data chunks worth 10 ms of sound should be a good fit.
* qemu audio tick runs at 250 Hz (by default), so processing
* data chunks worth 4 ms of sound should be a good fit.
*/
ba.tlength = pa_usec_to_bytes (10 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (5 * 1000, &ss);
ba.tlength = pa_usec_to_bytes (4 * 1000, &ss);
ba.minreq = pa_usec_to_bytes (2 * 1000, &ss);
ba.maxlength = -1;
ba.prebuf = -1;
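Either way, pa_usec_to_bytes converts a playback duration into a byte count for the negotiated sample spec, so tlength and minreq scale with the audio tick period discussed in the comment. A rough worked example with an assumed spec of 44100 Hz, 2 channels, 16-bit samples (these numbers are not taken from the hunk):

/* Hedged arithmetic sketch of what pa_usec_to_bytes yields for an assumed sample spec. */
#include <stddef.h>
#include <stdint.h>

static size_t bytes_for_usec(uint64_t usec)
{
    const uint64_t rate = 44100, channels = 2, bytes_per_sample = 2;   /* assumed spec */
    return (size_t)(usec * rate / 1000000 * channels * bytes_per_sample);
    /* 10 ms -> 441 frames -> 1764 bytes; 4 ms -> 176 frames -> 704 bytes */
}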

View File

@@ -18,24 +18,15 @@
*/
#include "hw/hw.h"
#include "qemu/timer.h"
#include "qemu-timer.h"
#include "ui/qemu-spice.h"
#define AUDIO_CAP "spice"
#include "audio.h"
#include "audio_int.h"
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
#define LINE_OUT_SAMPLES (480 * 4)
#else
#define LINE_OUT_SAMPLES (256 * 4)
#endif
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
#define LINE_IN_SAMPLES (480 * 4)
#else
#define LINE_IN_SAMPLES (256 * 4)
#endif
#define LINE_IN_SAMPLES 1024
#define LINE_OUT_SAMPLES 1024
typedef struct SpiceRateCtl {
int64_t start_ticks;
@@ -90,7 +81,7 @@ static void spice_audio_fini (void *opaque)
static void rate_start (SpiceRateCtl *rate)
{
memset (rate, 0, sizeof (*rate));
rate->start_ticks = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
rate->start_ticks = qemu_get_clock_ns (vm_clock);
}
static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
@@ -100,7 +91,7 @@ static int rate_get_samples (struct audio_pcm_info *info, SpiceRateCtl *rate)
int64_t bytes;
int64_t samples;
now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
now = qemu_get_clock_ns (vm_clock);
ticks = now - rate->start_ticks;
bytes = muldiv64 (ticks, info->bytes_per_second, get_ticks_per_sec ());
samples = (bytes - rate->bytes_sent) >> info->shift;
@@ -120,11 +111,7 @@ static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
SpiceVoiceOut *out = container_of (hw, SpiceVoiceOut, hw);
struct audsettings settings;
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
settings.freq = spice_server_get_best_playback_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_PLAYBACK_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_PLAYBACK_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
@@ -135,9 +122,6 @@ static int line_out_init (HWVoiceOut *hw, struct audsettings *as)
out->sin.base.sif = &playback_sif.base;
qemu_spice_add_interface (&out->sin.base);
#if SPICE_INTERFACE_PLAYBACK_MAJOR > 1 || SPICE_INTERFACE_PLAYBACK_MINOR >= 3
spice_server_set_playback_rate(&out->sin, settings.freq);
#endif
return 0;
}
@@ -248,11 +232,7 @@ static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
SpiceVoiceIn *in = container_of (hw, SpiceVoiceIn, hw);
struct audsettings settings;
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
settings.freq = spice_server_get_best_record_rate(NULL);
#else
settings.freq = SPICE_INTERFACE_RECORD_FREQ;
#endif
settings.nchannels = SPICE_INTERFACE_RECORD_CHAN;
settings.fmt = AUD_FMT_S16;
settings.endianness = AUDIO_HOST_ENDIANNESS;
@@ -263,9 +243,6 @@ static int line_in_init (HWVoiceIn *hw, struct audsettings *as)
in->sin.base.sif = &record_sif.base;
qemu_spice_add_interface (&in->sin.base);
#if SPICE_INTERFACE_RECORD_MAJOR > 2 || SPICE_INTERFACE_RECORD_MINOR >= 3
spice_server_set_record_rate(&in->sin, settings.freq);
#endif
return 0;
}

View File

@@ -22,7 +22,7 @@
* THE SOFTWARE.
*/
#include "hw/hw.h"
#include "qemu/timer.h"
#include "qemu-timer.h"
#include "audio.h"
#define AUDIO_CAP "wav"
@@ -52,7 +52,7 @@ static int wav_run_out (HWVoiceOut *hw, int live)
int rpos, decr, samples;
uint8_t *dst;
struct st_sample *src;
int64_t now = qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL);
int64_t now = qemu_get_clock_ns (vm_clock);
int64_t ticks = now - wav->old_ticks;
int64_t bytes =
muldiv64 (ticks, hw->info.bytes_per_second, get_ticks_per_sec ());

View File

@@ -1,5 +1,5 @@
#include "hw/hw.h"
#include "monitor/monitor.h"
#include "monitor.h"
#include "audio.h"
typedef struct {

View File

@@ -1,7 +1,7 @@
/* public domain */
#include "qemu-common.h"
#include "sysemu/sysemu.h"
#include "sysemu.h"
#include "audio.h"
#define AUDIO_CAP "winwave"

View File

@@ -1,8 +1,2 @@
common-obj-y += rng.o rng-egd.o
common-obj-$(CONFIG_POSIX) += rng-random.o
common-obj-y += msmouse.o
common-obj-$(CONFIG_BRLAPI) += baum.o
baum.o-cflags := $(SDL_CFLAGS)
common-obj-$(CONFIG_TPM) += tpm.o

View File

@@ -10,9 +10,9 @@
* See the COPYING file in the top-level directory.
*/
#include "sysemu/rng.h"
#include "sysemu/char.h"
#include "qapi/qmp/qerror.h"
#include "qemu/rng.h"
#include "qemu-char.h"
#include "qerror.h"
#include "hw/qdev.h" /* just for DEFINE_PROP_CHR */
#define TYPE_RNG_EGD "rng-egd"
@@ -91,14 +91,12 @@ static int rng_egd_chr_can_read(void *opaque)
static void rng_egd_chr_read(void *opaque, const uint8_t *buf, int size)
{
RngEgd *s = RNG_EGD(opaque);
size_t buf_offset = 0;
while (size > 0 && s->requests) {
RngRequest *req = s->requests->data;
int len = MIN(size, req->size - req->offset);
memcpy(req->data + req->offset, buf + buf_offset, len);
buf_offset += len;
memcpy(req->data + req->offset, buf, len);
req->offset += len;
size -= len;
@@ -151,11 +149,6 @@ static void rng_egd_opened(RngBackend *b, Error **errp)
return;
}
if (qemu_chr_fe_claim(s->chr) != 0) {
error_set(errp, QERR_DEVICE_IN_USE, s->chr_name);
return;
}
/* FIXME we should resubmit pending requests when the CDS reconnects. */
qemu_chr_add_handlers(s->chr, rng_egd_chr_can_read, rng_egd_chr_read,
NULL, s);
@@ -169,6 +162,7 @@ static void rng_egd_set_chardev(Object *obj, const char *value, Error **errp)
if (b->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
} else {
g_free(s->chr_name);
s->chr_name = g_strdup(value);
}
}
@@ -197,7 +191,6 @@ static void rng_egd_finalize(Object *obj)
if (s->chr) {
qemu_chr_add_handlers(s->chr, NULL, NULL, NULL, NULL);
qemu_chr_fe_release(s->chr);
}
g_free(s->chr_name);
@@ -214,7 +207,7 @@ static void rng_egd_class_init(ObjectClass *klass, void *data)
rbc->opened = rng_egd_opened;
}
static const TypeInfo rng_egd_info = {
static TypeInfo rng_egd_info = {
.name = TYPE_RNG_EGD,
.parent = TYPE_RNG_BACKEND,
.instance_size = sizeof(RngEgd),
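One side of the rng_egd_chr_read hunk above keeps a buf_offset cursor into the incoming chardev buffer, so that when a single read satisfies more than one queued request the later requests do not receive the same leading bytes again. A simplified, self-contained sketch of that copy loop; Req is a stand-in for RngRequest, not the real structure.

/* Hedged sketch: why the cursor into 'buf' matters when one read spans two requests. */
#include <stdint.h>
#include <string.h>
#define MIN(a, b) ((a) < (b) ? (a) : (b))

typedef struct { uint8_t *data; int size, offset; } Req;   /* simplified stand-in */

static void feed(Req *reqs, int nreqs, const uint8_t *buf, int size)
{
    size_t buf_offset = 0;
    for (int i = 0; i < nreqs && size > 0; i++) {
        Req *req = &reqs[i];
        int len = MIN(size, req->size - req->offset);
        memcpy(req->data + req->offset, buf + buf_offset, len);  /* the other variant copies from 'buf' each time */
        buf_offset += len;
        req->offset += len;
        size -= len;
    }
}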

View File

@@ -10,10 +10,10 @@
* See the COPYING file in the top-level directory.
*/
#include "sysemu/rng-random.h"
#include "sysemu/rng.h"
#include "qapi/qmp/qerror.h"
#include "qemu/main-loop.h"
#include "qemu/rng-random.h"
#include "qemu/rng.h"
#include "qerror.h"
#include "main-loop.h"
struct RndRandom
{
@@ -41,9 +41,6 @@ static void entropy_available(void *opaque)
ssize_t len;
len = read(s->fd, buffer, s->size);
if (len < 0 && errno == EAGAIN) {
return;
}
g_assert(len != -1);
s->receive_func(s->opaque, buffer, len);
@@ -77,9 +74,10 @@ static void rng_random_opened(RngBackend *b, Error **errp)
error_set(errp, QERR_INVALID_PARAMETER_VALUE,
"filename", "a valid filename");
} else {
s->fd = qemu_open(s->filename, O_RDONLY | O_NONBLOCK);
s->fd = open(s->filename, O_RDONLY | O_NONBLOCK);
if (s->fd == -1) {
error_setg_file_open(errp, errno, s->filename);
error_set(errp, QERR_OPEN_FILE_FAILED, s->filename);
}
}
}
@@ -123,16 +121,16 @@ static void rng_random_init(Object *obj)
NULL);
s->filename = g_strdup("/dev/random");
s->fd = -1;
}
static void rng_random_finalize(Object *obj)
{
RndRandom *s = RNG_RANDOM(obj);
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
if (s->fd != -1) {
qemu_set_fd_handler(s->fd, NULL, NULL, NULL);
qemu_close(s->fd);
close(s->fd);
}
g_free(s->filename);
@@ -146,7 +144,7 @@ static void rng_random_class_init(ObjectClass *klass, void *data)
rbc->opened = rng_random_opened;
}
static const TypeInfo rng_random_info = {
static TypeInfo rng_random_info = {
.name = TYPE_RNG_RANDOM,
.parent = TYPE_RNG_BACKEND,
.instance_size = sizeof(RndRandom),

View File

@@ -10,9 +10,8 @@
* See the COPYING file in the top-level directory.
*/
#include "sysemu/rng.h"
#include "qapi/qmp/qerror.h"
#include "qom/object_interfaces.h"
#include "qemu/rng.h"
#include "qerror.h"
void rng_backend_request_entropy(RngBackend *s, size_t size,
EntropyReceiveFunc *receive_entropy,
@@ -41,16 +40,15 @@ static bool rng_backend_prop_get_opened(Object *obj, Error **errp)
return s->opened;
}
static void rng_backend_complete(UserCreatable *uc, Error **errp)
void rng_backend_open(RngBackend *s, Error **errp)
{
object_property_set_bool(OBJECT(uc), true, "opened", errp);
object_property_set_bool(OBJECT(s), true, "opened", errp);
}
static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
RngBackend *s = RNG_BACKEND(obj);
RngBackendClass *k = RNG_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
@@ -62,14 +60,12 @@ static void rng_backend_prop_set_opened(Object *obj, bool value, Error **errp)
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
k->opened(s, errp);
}
s->opened = true;
if (!error_is_set(errp)) {
s->opened = value;
}
}
static void rng_backend_init(Object *obj)
@@ -80,25 +76,13 @@ static void rng_backend_init(Object *obj)
NULL);
}
static void rng_backend_class_init(ObjectClass *oc, void *data)
{
UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
ucc->complete = rng_backend_complete;
}
static const TypeInfo rng_backend_info = {
static TypeInfo rng_backend_info = {
.name = TYPE_RNG_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(RngBackend),
.instance_init = rng_backend_init,
.class_size = sizeof(RngBackendClass),
.class_init = rng_backend_class_init,
.abstract = true,
.interfaces = (InterfaceInfo[]) {
{ TYPE_USER_CREATABLE },
{ }
}
};
static void register_types(void)

View File

@@ -1,193 +0,0 @@
/*
* QEMU TPM Backend
*
* Copyright IBM, Corp. 2013
*
* Authors:
* Stefan Berger <stefanb@us.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
* Based on backends/rng.c by Anthony Liguori
*/
#include "sysemu/tpm_backend.h"
#include "qapi/qmp/qerror.h"
#include "sysemu/tpm.h"
#include "qemu/thread.h"
#include "sysemu/tpm_backend_int.h"
enum TpmType tpm_backend_get_type(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->type;
}
const char *tpm_backend_get_desc(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->desc();
}
void tpm_backend_destroy(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->destroy(s);
}
int tpm_backend_init(TPMBackend *s, TPMState *state,
TPMRecvDataCB *datacb)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->init(s, state, datacb);
}
int tpm_backend_startup_tpm(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->startup_tpm(s);
}
bool tpm_backend_had_startup_error(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->had_startup_error(s);
}
size_t tpm_backend_realloc_buffer(TPMBackend *s, TPMSizedBuffer *sb)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->realloc_buffer(sb);
}
void tpm_backend_deliver_request(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->deliver_request(s);
}
void tpm_backend_reset(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->reset(s);
}
void tpm_backend_cancel_cmd(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
k->ops->cancel_cmd(s);
}
bool tpm_backend_get_tpm_established_flag(TPMBackend *s)
{
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
return k->ops->get_tpm_established_flag(s);
}
static bool tpm_backend_prop_get_opened(Object *obj, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
return s->opened;
}
void tpm_backend_open(TPMBackend *s, Error **errp)
{
object_property_set_bool(OBJECT(s), true, "opened", errp);
}
static void tpm_backend_prop_set_opened(Object *obj, bool value, Error **errp)
{
TPMBackend *s = TPM_BACKEND(obj);
TPMBackendClass *k = TPM_BACKEND_GET_CLASS(s);
Error *local_err = NULL;
if (value == s->opened) {
return;
}
if (!value && s->opened) {
error_set(errp, QERR_PERMISSION_DENIED);
return;
}
if (k->opened) {
k->opened(s, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
}
s->opened = true;
}
static void tpm_backend_instance_init(Object *obj)
{
object_property_add_bool(obj, "opened",
tpm_backend_prop_get_opened,
tpm_backend_prop_set_opened,
NULL);
}
void tpm_backend_thread_deliver_request(TPMBackendThread *tbt)
{
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_PROCESS_CMD, NULL);
}
void tpm_backend_thread_create(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tbt->pool = g_thread_pool_new(func, user_data, 1, TRUE, NULL);
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_INIT, NULL);
}
}
void tpm_backend_thread_end(TPMBackendThread *tbt)
{
if (tbt->pool) {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_END, NULL);
g_thread_pool_free(tbt->pool, FALSE, TRUE);
tbt->pool = NULL;
}
}
void tpm_backend_thread_tpm_reset(TPMBackendThread *tbt,
GFunc func, gpointer user_data)
{
if (!tbt->pool) {
tpm_backend_thread_create(tbt, func, user_data);
} else {
g_thread_pool_push(tbt->pool, (gpointer)TPM_BACKEND_CMD_TPM_RESET,
NULL);
}
}
static const TypeInfo tpm_backend_info = {
.name = TYPE_TPM_BACKEND,
.parent = TYPE_OBJECT,
.instance_size = sizeof(TPMBackend),
.instance_init = tpm_backend_instance_init,
.class_size = sizeof(TPMBackendClass),
.abstract = true,
};
static void register_types(void)
{
type_register_static(&tpm_backend_info);
}
type_init(register_types);

View File

@@ -24,13 +24,13 @@
* THE SOFTWARE.
*/
#include "monitor/monitor.h"
#include "exec/cpu-common.h"
#include "sysemu/kvm.h"
#include "sysemu/balloon.h"
#include "monitor.h"
#include "cpu-common.h"
#include "kvm.h"
#include "balloon.h"
#include "trace.h"
#include "qmp-commands.h"
#include "qapi/qmp/qjson.h"
#include "qjson.h"
static QEMUBalloonEvent *balloon_event_fn;
static QEMUBalloonStatus *balloon_stat_fn;

View File

@@ -14,7 +14,7 @@
#ifndef _QEMU_BALLOON_H
#define _QEMU_BALLOON_H
#include "monitor/monitor.h"
#include "monitor.h"
#include "qapi-types.h"
typedef void (QEMUBalloonEvent)(void *opaque, ram_addr_t target);

View File

@@ -9,8 +9,8 @@
* Version 2.
*/
#include "qemu/bitops.h"
#include "qemu/bitmap.h"
#include "bitops.h"
#include "bitmap.h"
/*
* bitmaps provide an array of bits, implemented using an
@@ -36,9 +36,9 @@
* endian architectures.
*/
int slow_bitmap_empty(const unsigned long *bitmap, long bits)
int slow_bitmap_empty(const unsigned long *bitmap, int bits)
{
long k, lim = bits/BITS_PER_LONG;
int k, lim = bits/BITS_PER_LONG;
for (k = 0; k < lim; ++k) {
if (bitmap[k]) {
@@ -54,9 +54,9 @@ int slow_bitmap_empty(const unsigned long *bitmap, long bits)
return 1;
}
int slow_bitmap_full(const unsigned long *bitmap, long bits)
int slow_bitmap_full(const unsigned long *bitmap, int bits)
{
long k, lim = bits/BITS_PER_LONG;
int k, lim = bits/BITS_PER_LONG;
for (k = 0; k < lim; ++k) {
if (~bitmap[k]) {
@@ -74,9 +74,9 @@ int slow_bitmap_full(const unsigned long *bitmap, long bits)
}
int slow_bitmap_equal(const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits)
const unsigned long *bitmap2, int bits)
{
long k, lim = bits/BITS_PER_LONG;
int k, lim = bits/BITS_PER_LONG;
for (k = 0; k < lim; ++k) {
if (bitmap1[k] != bitmap2[k]) {
@@ -94,9 +94,9 @@ int slow_bitmap_equal(const unsigned long *bitmap1,
}
void slow_bitmap_complement(unsigned long *dst, const unsigned long *src,
long bits)
int bits)
{
long k, lim = bits/BITS_PER_LONG;
int k, lim = bits/BITS_PER_LONG;
for (k = 0; k < lim; ++k) {
dst[k] = ~src[k];
@@ -108,10 +108,10 @@ void slow_bitmap_complement(unsigned long *dst, const unsigned long *src,
}
int slow_bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits)
const unsigned long *bitmap2, int bits)
{
long k;
long nr = BITS_TO_LONGS(bits);
int k;
int nr = BITS_TO_LONGS(bits);
unsigned long result = 0;
for (k = 0; k < nr; k++) {
@@ -121,10 +121,10 @@ int slow_bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
}
void slow_bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits)
const unsigned long *bitmap2, int bits)
{
long k;
long nr = BITS_TO_LONGS(bits);
int k;
int nr = BITS_TO_LONGS(bits);
for (k = 0; k < nr; k++) {
dst[k] = bitmap1[k] | bitmap2[k];
@@ -132,10 +132,10 @@ void slow_bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
}
void slow_bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits)
const unsigned long *bitmap2, int bits)
{
long k;
long nr = BITS_TO_LONGS(bits);
int k;
int nr = BITS_TO_LONGS(bits);
for (k = 0; k < nr; k++) {
dst[k] = bitmap1[k] ^ bitmap2[k];
@@ -143,10 +143,10 @@ void slow_bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
}
int slow_bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits)
const unsigned long *bitmap2, int bits)
{
long k;
long nr = BITS_TO_LONGS(bits);
int k;
int nr = BITS_TO_LONGS(bits);
unsigned long result = 0;
for (k = 0; k < nr; k++) {
@@ -157,10 +157,10 @@ int slow_bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
#define BITMAP_FIRST_WORD_MASK(start) (~0UL << ((start) % BITS_PER_LONG))
void bitmap_set(unsigned long *map, long start, long nr)
void bitmap_set(unsigned long *map, int start, int nr)
{
unsigned long *p = map + BIT_WORD(start);
const long size = start + nr;
const int size = start + nr;
int bits_to_set = BITS_PER_LONG - (start % BITS_PER_LONG);
unsigned long mask_to_set = BITMAP_FIRST_WORD_MASK(start);
@@ -177,10 +177,10 @@ void bitmap_set(unsigned long *map, long start, long nr)
}
}
void bitmap_clear(unsigned long *map, long start, long nr)
void bitmap_clear(unsigned long *map, int start, int nr)
{
unsigned long *p = map + BIT_WORD(start);
const long size = start + nr;
const int size = start + nr;
int bits_to_clear = BITS_PER_LONG - (start % BITS_PER_LONG);
unsigned long mask_to_clear = BITMAP_FIRST_WORD_MASK(start);
@@ -212,10 +212,10 @@ void bitmap_clear(unsigned long *map, long start, long nr)
* power of 2. A @align_mask of 0 means no alignment is required.
*/
unsigned long bitmap_find_next_zero_area(unsigned long *map,
unsigned long size,
unsigned long start,
unsigned long nr,
unsigned long align_mask)
unsigned long size,
unsigned long start,
unsigned int nr,
unsigned long align_mask)
{
unsigned long index, end, i;
again:
@@ -237,9 +237,9 @@ again:
}
int slow_bitmap_intersects(const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits)
const unsigned long *bitmap2, int bits)
{
long k, lim = bits/BITS_PER_LONG;
int k, lim = bits/BITS_PER_LONG;
for (k = 0; k < lim; ++k) {
if (bitmap1[k] & bitmap2[k]) {
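bitmap_set above masks the partially covered first word with BITMAP_FIRST_WORD_MASK before moving on to whole-word fills. Below is a small self-contained worked example of that masking for start=3, nr=7, assuming 64-bit longs; it reproduces the effect rather than reusing the macros themselves.

/* Hedged worked example of the first-word masking used by bitmap_set above. */
#include <stdio.h>

int main(void)
{
    unsigned long map[1] = { 0 };
    int start = 3, nr = 7;                              /* set bits 3..9 */
    unsigned long mask = ~0UL << (start % 64);          /* BITMAP_FIRST_WORD_MASK(3) */
    int bits_to_set = 64 - (start % 64);
    if (nr < bits_to_set) {
        mask &= ~(~0UL << ((start + nr) % 64));         /* clip to the last requested bit */
    }
    map[0] |= mask;
    printf("%#lx\n", map[0]);                           /* prints 0x3f8: bits 3..9 set */
    return 0;
}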

View File

@@ -13,7 +13,7 @@
#define BITMAP_H
#include "qemu-common.h"
#include "qemu/bitops.h"
#include "bitops.h"
/*
* The available bitmap operations and their rough meaning in the
@@ -31,7 +31,7 @@
* bitmap_andnot(dst, src1, src2, nbits) *dst = *src1 & ~(*src2)
* bitmap_complement(dst, src, nbits) *dst = ~(*src)
* bitmap_equal(src1, src2, nbits) Are *src1 and *src2 equal?
* bitmap_intersects(src1, src2, nbits) Do *src1 and *src2 overlap?
* bitmap_intersects(src1, src2, nbits) Do *src1 and *src2 overlap?
* bitmap_empty(src, nbits) Are all bits zero in *src?
* bitmap_full(src, nbits) Are all bits set in *src?
* bitmap_set(dst, pos, nbits) Set specified bit area
@@ -62,71 +62,71 @@
)
#define DECLARE_BITMAP(name,bits) \
unsigned long name[BITS_TO_LONGS(bits)]
unsigned long name[BITS_TO_LONGS(bits)]
#define small_nbits(nbits) \
((nbits) <= BITS_PER_LONG)
((nbits) <= BITS_PER_LONG)
int slow_bitmap_empty(const unsigned long *bitmap, long bits);
int slow_bitmap_full(const unsigned long *bitmap, long bits);
int slow_bitmap_empty(const unsigned long *bitmap, int bits);
int slow_bitmap_full(const unsigned long *bitmap, int bits);
int slow_bitmap_equal(const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits);
const unsigned long *bitmap2, int bits);
void slow_bitmap_complement(unsigned long *dst, const unsigned long *src,
long bits);
int bits);
void slow_bitmap_shift_right(unsigned long *dst,
const unsigned long *src, int shift, long bits);
const unsigned long *src, int shift, int bits);
void slow_bitmap_shift_left(unsigned long *dst,
const unsigned long *src, int shift, long bits);
const unsigned long *src, int shift, int bits);
int slow_bitmap_and(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits);
const unsigned long *bitmap2, int bits);
void slow_bitmap_or(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits);
const unsigned long *bitmap2, int bits);
void slow_bitmap_xor(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits);
const unsigned long *bitmap2, int bits);
int slow_bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits);
const unsigned long *bitmap2, int bits);
int slow_bitmap_intersects(const unsigned long *bitmap1,
const unsigned long *bitmap2, long bits);
const unsigned long *bitmap2, int bits);
static inline unsigned long *bitmap_new(long nbits)
static inline unsigned long *bitmap_new(int nbits)
{
long len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
return g_malloc0(len);
}
static inline void bitmap_zero(unsigned long *dst, long nbits)
static inline void bitmap_zero(unsigned long *dst, int nbits)
{
if (small_nbits(nbits)) {
*dst = 0UL;
} else {
long len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
memset(dst, 0, len);
}
}
static inline void bitmap_fill(unsigned long *dst, long nbits)
static inline void bitmap_fill(unsigned long *dst, int nbits)
{
size_t nlongs = BITS_TO_LONGS(nbits);
if (!small_nbits(nbits)) {
long len = (nlongs - 1) * sizeof(unsigned long);
int len = (nlongs - 1) * sizeof(unsigned long);
memset(dst, 0xff, len);
}
dst[nlongs - 1] = BITMAP_LAST_WORD_MASK(nbits);
}
static inline void bitmap_copy(unsigned long *dst, const unsigned long *src,
long nbits)
int nbits)
{
if (small_nbits(nbits)) {
*dst = *src;
} else {
long len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
int len = BITS_TO_LONGS(nbits) * sizeof(unsigned long);
memcpy(dst, src, len);
}
}
static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
const unsigned long *src2, long nbits)
const unsigned long *src2, int nbits)
{
if (small_nbits(nbits)) {
return (*dst = *src1 & *src2) != 0;
@@ -135,7 +135,7 @@ static inline int bitmap_and(unsigned long *dst, const unsigned long *src1,
}
static inline void bitmap_or(unsigned long *dst, const unsigned long *src1,
const unsigned long *src2, long nbits)
const unsigned long *src2, int nbits)
{
if (small_nbits(nbits)) {
*dst = *src1 | *src2;
@@ -145,7 +145,7 @@ static inline void bitmap_or(unsigned long *dst, const unsigned long *src1,
}
static inline void bitmap_xor(unsigned long *dst, const unsigned long *src1,
const unsigned long *src2, long nbits)
const unsigned long *src2, int nbits)
{
if (small_nbits(nbits)) {
*dst = *src1 ^ *src2;
@@ -155,7 +155,7 @@ static inline void bitmap_xor(unsigned long *dst, const unsigned long *src1,
}
static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1,
const unsigned long *src2, long nbits)
const unsigned long *src2, int nbits)
{
if (small_nbits(nbits)) {
return (*dst = *src1 & ~(*src2)) != 0;
@@ -163,9 +163,8 @@ static inline int bitmap_andnot(unsigned long *dst, const unsigned long *src1,
return slow_bitmap_andnot(dst, src1, src2, nbits);
}
static inline void bitmap_complement(unsigned long *dst,
const unsigned long *src,
long nbits)
static inline void bitmap_complement(unsigned long *dst, const unsigned long *src,
int nbits)
{
if (small_nbits(nbits)) {
*dst = ~(*src) & BITMAP_LAST_WORD_MASK(nbits);
@@ -175,7 +174,7 @@ static inline void bitmap_complement(unsigned long *dst,
}
static inline int bitmap_equal(const unsigned long *src1,
const unsigned long *src2, long nbits)
const unsigned long *src2, int nbits)
{
if (small_nbits(nbits)) {
return ! ((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
@@ -184,7 +183,7 @@ static inline int bitmap_equal(const unsigned long *src1,
}
}
static inline int bitmap_empty(const unsigned long *src, long nbits)
static inline int bitmap_empty(const unsigned long *src, int nbits)
{
if (small_nbits(nbits)) {
return ! (*src & BITMAP_LAST_WORD_MASK(nbits));
@@ -193,7 +192,7 @@ static inline int bitmap_empty(const unsigned long *src, long nbits)
}
}
static inline int bitmap_full(const unsigned long *src, long nbits)
static inline int bitmap_full(const unsigned long *src, int nbits)
{
if (small_nbits(nbits)) {
return ! (~(*src) & BITMAP_LAST_WORD_MASK(nbits));
@@ -203,7 +202,7 @@ static inline int bitmap_full(const unsigned long *src, long nbits)
}
static inline int bitmap_intersects(const unsigned long *src1,
const unsigned long *src2, long nbits)
const unsigned long *src2, int nbits)
{
if (small_nbits(nbits)) {
return ((*src1 & *src2) & BITMAP_LAST_WORD_MASK(nbits)) != 0;
@@ -212,21 +211,12 @@ static inline int bitmap_intersects(const unsigned long *src1,
}
}
void bitmap_set(unsigned long *map, long i, long len);
void bitmap_clear(unsigned long *map, long start, long nr);
void bitmap_set(unsigned long *map, int i, int len);
void bitmap_clear(unsigned long *map, int start, int nr);
unsigned long bitmap_find_next_zero_area(unsigned long *map,
unsigned long size,
unsigned long start,
unsigned long nr,
unsigned long align_mask);
static inline unsigned long *bitmap_zero_extend(unsigned long *old,
long old_nbits, long new_nbits)
{
long new_len = BITS_TO_LONGS(new_nbits) * sizeof(unsigned long);
unsigned long *new = g_realloc(old, new_len);
bitmap_clear(new, old_nbits, new_nbits - old_nbits);
return new;
}
unsigned long size,
unsigned long start,
unsigned int nr,
unsigned long align_mask);
#endif /* BITMAP_H */

View File

@@ -11,7 +11,7 @@
* 2 of the License, or (at your option) any later version.
*/
#include "qemu/bitops.h"
#include "bitops.h"
#define BITOP_WORD(nr) ((nr) / BITS_PER_LONG)
@@ -42,23 +42,7 @@ unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
size -= BITS_PER_LONG;
result += BITS_PER_LONG;
}
while (size >= 4*BITS_PER_LONG) {
unsigned long d1, d2, d3;
tmp = *p;
d1 = *(p+1);
d2 = *(p+2);
d3 = *(p+3);
if (tmp) {
goto found_middle;
}
if (d1 | d2 | d3) {
break;
}
p += 4;
result += 4*BITS_PER_LONG;
size -= 4*BITS_PER_LONG;
}
while (size >= BITS_PER_LONG) {
while (size & ~(BITS_PER_LONG-1)) {
if ((tmp = *(p++))) {
goto found_middle;
}
@@ -76,7 +60,7 @@ found_first:
return result + size; /* Nope. */
}
found_middle:
return result + ctzl(tmp);
return result + bitops_ffsl(tmp);
}
/*
@@ -125,7 +109,7 @@ found_first:
return result + size; /* Nope. */
}
found_middle:
return result + ctzl(~tmp);
return result + ffz(tmp);
}
unsigned long find_last_bit(const unsigned long *addr, unsigned long size)
@@ -149,7 +133,7 @@ unsigned long find_last_bit(const unsigned long *addr, unsigned long size)
tmp = addr[--words];
if (tmp) {
found:
return words * BITS_PER_LONG + BITS_PER_LONG - 1 - clzl(tmp);
return words * BITS_PER_LONG + bitops_flsl(tmp);
}
}

View File

@@ -13,7 +13,6 @@
#define BITOPS_H
#include "qemu-common.h"
#include "host-utils.h"
#define BITS_PER_BYTE CHAR_BIT
#define BITS_PER_LONG (sizeof (unsigned long) * BITS_PER_BYTE)
@@ -23,12 +22,99 @@
#define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
#define BITS_TO_LONGS(nr) DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
/**
* bitops_ffsl - find first bit in word.
* @word: The word to search
*
* Undefined if no bit exists, so code should check against 0 first.
*/
static unsigned long bitops_ffsl(unsigned long word)
{
int num = 0;
#if LONG_MAX > 0x7FFFFFFF
if ((word & 0xffffffff) == 0) {
num += 32;
word >>= 32;
}
#endif
if ((word & 0xffff) == 0) {
num += 16;
word >>= 16;
}
if ((word & 0xff) == 0) {
num += 8;
word >>= 8;
}
if ((word & 0xf) == 0) {
num += 4;
word >>= 4;
}
if ((word & 0x3) == 0) {
num += 2;
word >>= 2;
}
if ((word & 0x1) == 0) {
num += 1;
}
return num;
}
/**
* bitops_flsl - find last (most-significant) set bit in a long word
* @word: the word to search
*
* Undefined if no set bit exists, so code should check against 0 first.
*/
static inline unsigned long bitops_flsl(unsigned long word)
{
int num = BITS_PER_LONG - 1;
#if LONG_MAX > 0x7FFFFFFF
if (!(word & (~0ul << 32))) {
num -= 32;
word <<= 32;
}
#endif
if (!(word & (~0ul << (BITS_PER_LONG-16)))) {
num -= 16;
word <<= 16;
}
if (!(word & (~0ul << (BITS_PER_LONG-8)))) {
num -= 8;
word <<= 8;
}
if (!(word & (~0ul << (BITS_PER_LONG-4)))) {
num -= 4;
word <<= 4;
}
if (!(word & (~0ul << (BITS_PER_LONG-2)))) {
num -= 2;
word <<= 2;
}
if (!(word & (~0ul << (BITS_PER_LONG-1))))
num -= 1;
return num;
}
/**
* ffz - find first zero in word.
* @word: The word to search
*
* Undefined if no zero exists, so code should check against ~0UL first.
*/
static inline unsigned long ffz(unsigned long word)
{
return bitops_ffsl(~word);
}
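/* [Editor's aside, not part of this diff] Expected results of the three
 * search helpers above; indices are 0-based and, as documented, passing
 * 0 (or ~0UL to ffz) is undefined: */
#include <assert.h>

static void bit_search_sketch(void)
{
    assert(bitops_ffsl(0x8UL) == 3);   /* lowest set bit of 0b1000 */
    assert(bitops_flsl(0x9UL) == 3);   /* highest set bit of 0b1001 */
    assert(ffz(0x7UL) == 3);           /* first clear bit above 0b0111 */
}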
/**
* set_bit - Set a bit in memory
* @nr: the bit to set
* @addr: the address to start counting from
*/
static inline void set_bit(long nr, unsigned long *addr)
static inline void set_bit(int nr, unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
@@ -41,7 +127,7 @@ static inline void set_bit(long nr, unsigned long *addr)
* @nr: Bit to clear
* @addr: Address to start counting from
*/
static inline void clear_bit(long nr, unsigned long *addr)
static inline void clear_bit(int nr, unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
@@ -54,7 +140,7 @@ static inline void clear_bit(long nr, unsigned long *addr)
* @nr: Bit to change
* @addr: Address to start counting from
*/
static inline void change_bit(long nr, unsigned long *addr)
static inline void change_bit(int nr, unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
@@ -67,7 +153,7 @@ static inline void change_bit(long nr, unsigned long *addr)
* @nr: Bit to set
* @addr: Address to count from
*/
static inline int test_and_set_bit(long nr, unsigned long *addr)
static inline int test_and_set_bit(int nr, unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
@@ -82,7 +168,7 @@ static inline int test_and_set_bit(long nr, unsigned long *addr)
* @nr: Bit to clear
* @addr: Address to count from
*/
static inline int test_and_clear_bit(long nr, unsigned long *addr)
static inline int test_and_clear_bit(int nr, unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
@@ -97,7 +183,7 @@ static inline int test_and_clear_bit(long nr, unsigned long *addr)
* @nr: Bit to change
* @addr: Address to count from
*/
static inline int test_and_change_bit(long nr, unsigned long *addr)
static inline int test_and_change_bit(int nr, unsigned long *addr)
{
unsigned long mask = BIT_MASK(nr);
unsigned long *p = addr + BIT_WORD(nr);
@@ -112,7 +198,7 @@ static inline int test_and_change_bit(long nr, unsigned long *addr)
* @nr: bit number to test
* @addr: Address to start counting from
*/
static inline int test_bit(long nr, const unsigned long *addr)
static inline int test_bit(int nr, const unsigned long *addr)
{
return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG-1)));
}
@@ -183,86 +269,6 @@ static inline unsigned long hweight_long(unsigned long w)
return count;
}
/**
* rol8 - rotate an 8-bit value left
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint8_t rol8(uint8_t word, unsigned int shift)
{
return (word << shift) | (word >> (8 - shift));
}
/**
* ror8 - rotate an 8-bit value right
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint8_t ror8(uint8_t word, unsigned int shift)
{
return (word >> shift) | (word << (8 - shift));
}
/**
* rol16 - rotate a 16-bit value left
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint16_t rol16(uint16_t word, unsigned int shift)
{
return (word << shift) | (word >> (16 - shift));
}
/**
* ror16 - rotate a 16-bit value right
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint16_t ror16(uint16_t word, unsigned int shift)
{
return (word >> shift) | (word << (16 - shift));
}
/**
* rol32 - rotate a 32-bit value left
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint32_t rol32(uint32_t word, unsigned int shift)
{
return (word << shift) | (word >> (32 - shift));
}
/**
* ror32 - rotate a 32-bit value right
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint32_t ror32(uint32_t word, unsigned int shift)
{
return (word >> shift) | (word << (32 - shift));
}
/**
* rol64 - rotate a 64-bit value left
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint64_t rol64(uint64_t word, unsigned int shift)
{
return (word << shift) | (word >> (64 - shift));
}
/**
* ror64 - rotate a 64-bit value right
* @word: value to rotate
* @shift: bits to roll
*/
static inline uint64_t ror64(uint64_t word, unsigned int shift)
{
return (word >> shift) | (word << (64 - shift));
}
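/* [Editor's aside, not part of this diff] What the rotate helpers compute;
 * shift counts of 0 or the full word width are not expected here: */
#include <assert.h>

static void rotate_sketch(void)
{
    assert(rol8(0x81, 1) == 0x03);                     /* 1000 0001 -> 0000 0011 */
    assert(ror16(0x0001, 4) == 0x1000);                /* low nibble wraps to the top */
    assert(rol32(0x80000000U, 1) == 0x00000001U);
    assert(ror64(0x1ULL, 1) == 0x8000000000000000ULL);
}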
/**
* extract32:
* @value: the value to extract the bit field from
@@ -301,56 +307,6 @@ static inline uint64_t extract64(uint64_t value, int start, int length)
return (value >> start) & (~0ULL >> (64 - length));
}
/**
* sextract32:
* @value: the value to extract the bit field from
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
*
* Extract from the 32 bit input @value the bit field specified by the
* @start and @length parameters, and return it, sign extended to
* an int32_t (ie with the most significant bit of the field propagated
* to all the upper bits of the return value). The bit field must lie
* entirely within the 32 bit word. It is valid to request that
* all 32 bits are returned (ie @length 32 and @start 0).
*
* Returns: the sign extended value of the bit field extracted from the
* input value.
*/
static inline int32_t sextract32(uint32_t value, int start, int length)
{
assert(start >= 0 && length > 0 && length <= 32 - start);
/* Note that this implementation relies on right shift of signed
* integers being an arithmetic shift.
*/
return ((int32_t)(value << (32 - length - start))) >> (32 - length);
}
/**
* sextract64:
* @value: the value to extract the bit field from
* @start: the lowest bit in the bit field (numbered from 0)
* @length: the length of the bit field
*
* Extract from the 64 bit input @value the bit field specified by the
* @start and @length parameters, and return it, sign extended to
* an int64_t (ie with the most significant bit of the field propagated
* to all the upper bits of the return value). The bit field must lie
* entirely within the 64 bit word. It is valid to request that
* all 64 bits are returned (ie @length 64 and @start 0).
*
* Returns: the sign extended value of the bit field extracted from the
* input value.
*/
static inline uint64_t sextract64(uint64_t value, int start, int length)
{
assert(start >= 0 && length > 0 && length <= 64 - start);
/* Note that this implementation relies on right shift of signed
* integers being an arithmetic shift.
*/
return ((int64_t)(value << (64 - length - start))) >> (64 - length);
}
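/* [Editor's aside, not part of this diff] The sign extension performed by
 * sextract32, compared with a plain extract32 of the same field: */
#include <assert.h>

static void sextract_sketch(void)
{
    assert(extract32(0xF0, 4, 4) == 0xF);    /* raw field value: 0b1111 */
    assert(sextract32(0xF0, 4, 4) == -1);    /* same field, sign extended */
    assert(sextract32(0x70, 4, 4) == 7);     /* top bit clear: stays positive */
}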
/**
* deposit32:
* @value: initial value to insert bit field into

View File

@@ -14,22 +14,20 @@
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "block_int.h"
#include "hw/hw.h"
#include "qemu/queue.h"
#include "qemu/timer.h"
#include "migration/block.h"
#include "migration/migration.h"
#include "sysemu/blockdev.h"
#include "qemu-queue.h"
#include "qemu-timer.h"
#include "block-migration.h"
#include "migration.h"
#include "blockdev.h"
#include <assert.h>
#define BLOCK_SIZE (1 << 20)
#define BDRV_SECTORS_PER_DIRTY_CHUNK (BLOCK_SIZE >> BDRV_SECTOR_BITS)
#define BLOCK_SIZE (BDRV_SECTORS_PER_DIRTY_CHUNK << BDRV_SECTOR_BITS)
#define BLK_MIG_FLAG_DEVICE_BLOCK 0x01
#define BLK_MIG_FLAG_EOS 0x02
#define BLK_MIG_FLAG_PROGRESS 0x04
#define BLK_MIG_FLAG_ZERO_BLOCK 0x08
#define MAX_IS_ALLOCATED_SEARCH 65536
@@ -44,25 +42,19 @@
#endif
typedef struct BlkMigDevState {
/* Written during setup phase. Can be read without a lock. */
BlockDriverState *bs;
int shared_base;
int64_t total_sectors;
QSIMPLEQ_ENTRY(BlkMigDevState) entry;
/* Only used by migration thread. Does not need a lock. */
int bulk_completed;
int shared_base;
int64_t cur_sector;
int64_t cur_dirty;
/* Protected by block migration lock. */
unsigned long *aio_bitmap;
int64_t completed_sectors;
BdrvDirtyBitmap *dirty_bitmap;
int64_t total_sectors;
int64_t dirty;
QSIMPLEQ_ENTRY(BlkMigDevState) entry;
unsigned long *aio_bitmap;
} BlkMigDevState;
typedef struct BlkMigBlock {
/* Only used by migration thread. */
uint8_t *buf;
BlkMigDevState *bmds;
int64_t sector;
@@ -70,77 +62,41 @@ typedef struct BlkMigBlock {
struct iovec iov;
QEMUIOVector qiov;
BlockDriverAIOCB *aiocb;
/* Protected by block migration lock. */
int ret;
QSIMPLEQ_ENTRY(BlkMigBlock) entry;
} BlkMigBlock;
typedef struct BlkMigState {
/* Written during setup phase. Can be read without a lock. */
int blk_enable;
int shared_base;
QSIMPLEQ_HEAD(bmds_list, BlkMigDevState) bmds_list;
int64_t total_sector_sum;
bool zero_blocks;
/* Protected by lock. */
QSIMPLEQ_HEAD(blk_list, BlkMigBlock) blk_list;
int submitted;
int read_done;
/* Only used by migration thread. Does not need a lock. */
int transferred;
int64_t total_sector_sum;
int prev_progress;
int bulk_completed;
/* Lock must be taken _inside_ the iothread lock. */
QemuMutex lock;
long double total_time;
long double prev_time_offset;
int reads;
} BlkMigState;
static BlkMigState block_mig_state;
static void blk_mig_lock(void)
{
qemu_mutex_lock(&block_mig_state.lock);
}
static void blk_mig_unlock(void)
{
qemu_mutex_unlock(&block_mig_state.lock);
}
/* Must run outside of the iothread lock during the bulk phase,
* or the VM will stall.
*/
static void blk_send(QEMUFile *f, BlkMigBlock * blk)
{
int len;
uint64_t flags = BLK_MIG_FLAG_DEVICE_BLOCK;
if (block_mig_state.zero_blocks &&
buffer_is_zero(blk->buf, BLOCK_SIZE)) {
flags |= BLK_MIG_FLAG_ZERO_BLOCK;
}
/* sector number and flags */
qemu_put_be64(f, (blk->sector << BDRV_SECTOR_BITS)
| flags);
| BLK_MIG_FLAG_DEVICE_BLOCK);
/* device name */
len = strlen(blk->bmds->bs->device_name);
qemu_put_byte(f, len);
qemu_put_buffer(f, (uint8_t *)blk->bmds->bs->device_name, len);
/* if a block is zero we need to flush here since the network
* bandwidth is now a lot higher than the storage device bandwidth.
* thus if we queue zero blocks we slow down the migration */
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
qemu_fflush(f);
return;
}
qemu_put_buffer(f, blk->buf, BLOCK_SIZE);
}
@@ -154,11 +110,9 @@ uint64_t blk_mig_bytes_transferred(void)
BlkMigDevState *bmds;
uint64_t sum = 0;
blk_mig_lock();
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
sum += bmds->completed_sectors;
}
blk_mig_unlock();
return sum << BDRV_SECTOR_BITS;
}
@@ -178,8 +132,11 @@ uint64_t blk_mig_bytes_total(void)
return sum << BDRV_SECTOR_BITS;
}
/* Called with migration lock held. */
static inline long double compute_read_bwidth(void)
{
assert(block_mig_state.total_time != 0);
return (block_mig_state.reads / block_mig_state.total_time) * BLOCK_SIZE;
}
static int bmds_aio_inflight(BlkMigDevState *bmds, int64_t sector)
{
@@ -193,8 +150,6 @@ static int bmds_aio_inflight(BlkMigDevState *bmds, int64_t sector)
}
}
/* Called with migration lock held. */
static void bmds_set_aio_inflight(BlkMigDevState *bmds, int64_t sector_num,
int nb_sectors, int set)
{
@@ -229,26 +184,25 @@ static void alloc_aio_bitmap(BlkMigDevState *bmds)
bmds->aio_bitmap = g_malloc0(bitmap_size);
}
/* Never hold migration lock when yielding to the main loop! */
static void blk_mig_read_cb(void *opaque, int ret)
{
long double curr_time = qemu_get_clock_ns(rt_clock);
BlkMigBlock *blk = opaque;
blk_mig_lock();
blk->ret = ret;
block_mig_state.reads++;
block_mig_state.total_time += (curr_time - block_mig_state.prev_time_offset);
block_mig_state.prev_time_offset = curr_time;
QSIMPLEQ_INSERT_TAIL(&block_mig_state.blk_list, blk, entry);
bmds_set_aio_inflight(blk->bmds, blk->sector, blk->nr_sectors, 0);
block_mig_state.submitted--;
block_mig_state.read_done++;
assert(block_mig_state.submitted >= 0);
blk_mig_unlock();
}
/* Called with no lock taken. */
static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
{
int64_t total_sectors = bmds->total_sectors;
@@ -258,13 +212,11 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
int nr_sectors;
if (bmds->shared_base) {
qemu_mutex_lock_iothread();
while (cur_sector < total_sectors &&
!bdrv_is_allocated(bs, cur_sector, MAX_IS_ALLOCATED_SEARCH,
&nr_sectors)) {
cur_sector += nr_sectors;
}
qemu_mutex_unlock_iothread();
}
if (cur_sector >= total_sectors) {
@@ -293,53 +245,26 @@ static int mig_save_device_bulk(QEMUFile *f, BlkMigDevState *bmds)
blk->iov.iov_len = nr_sectors * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&blk->qiov, &blk->iov, 1);
blk_mig_lock();
block_mig_state.submitted++;
blk_mig_unlock();
if (block_mig_state.submitted == 0) {
block_mig_state.prev_time_offset = qemu_get_clock_ns(rt_clock);
}
qemu_mutex_lock_iothread();
blk->aiocb = bdrv_aio_readv(bs, cur_sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
block_mig_state.submitted++;
bdrv_reset_dirty(bs, cur_sector, nr_sectors);
qemu_mutex_unlock_iothread();
bmds->cur_sector = cur_sector + nr_sectors;
return (bmds->cur_sector >= total_sectors);
}
/* Called with iothread lock taken. */
static int set_dirty_tracking(void)
{
BlkMigDevState *bmds;
int ret;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bmds->dirty_bitmap = bdrv_create_dirty_bitmap(bmds->bs, BLOCK_SIZE,
NULL);
if (!bmds->dirty_bitmap) {
ret = -errno;
goto fail;
}
}
return 0;
fail:
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
if (bmds->dirty_bitmap) {
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
}
}
return ret;
}
static void unset_dirty_tracking(void)
static void set_dirty_tracking(int enable)
{
BlkMigDevState *bmds;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
bdrv_release_dirty_bitmap(bmds->bs, bmds->dirty_bitmap);
bdrv_set_dirty_tracking(bmds->bs, enable);
}
}
@@ -361,8 +286,8 @@ static void init_blk_migration_it(void *opaque, BlockDriverState *bs)
bmds->completed_sectors = 0;
bmds->shared_base = block_mig_state.shared_base;
alloc_aio_bitmap(bmds);
drive_get_ref(drive_get_by_blockdev(bs));
bdrv_set_in_use(bs, 1);
bdrv_ref(bs);
block_mig_state.total_sector_sum += sectors;
@@ -385,13 +310,12 @@ static void init_blk_migration(QEMUFile *f)
block_mig_state.total_sector_sum = 0;
block_mig_state.prev_progress = -1;
block_mig_state.bulk_completed = 0;
block_mig_state.zero_blocks = migrate_zero_blocks();
block_mig_state.total_time = 0;
block_mig_state.reads = 0;
bdrv_iterate(init_blk_migration_it, NULL);
}
/* Called with no lock taken. */
static int blk_mig_save_bulked_block(QEMUFile *f)
{
int64_t completed_sector_sum = 0;
@@ -438,8 +362,6 @@ static void blk_mig_reset_dirty_cursor(void)
}
}
/* Called with iothread lock taken. */
static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
int is_async)
{
@@ -450,14 +372,10 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
int ret = -EIO;
for (sector = bmds->cur_dirty; sector < bmds->total_sectors;) {
blk_mig_lock();
if (bmds_aio_inflight(bmds, sector)) {
blk_mig_unlock();
bdrv_drain_all();
} else {
blk_mig_unlock();
}
if (bdrv_get_dirty(bmds->bs, bmds->dirty_bitmap, sector)) {
if (bdrv_get_dirty(bmds->bs, sector)) {
if (total_sectors - sector < BDRV_SECTORS_PER_DIRTY_CHUNK) {
nr_sectors = total_sectors - sector;
@@ -475,13 +393,14 @@ static int mig_save_device_dirty(QEMUFile *f, BlkMigDevState *bmds,
blk->iov.iov_len = nr_sectors * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&blk->qiov, &blk->iov, 1);
if (block_mig_state.submitted == 0) {
block_mig_state.prev_time_offset = qemu_get_clock_ns(rt_clock);
}
blk->aiocb = bdrv_aio_readv(bmds->bs, sector, &blk->qiov,
nr_sectors, blk_mig_read_cb, blk);
blk_mig_lock();
block_mig_state.submitted++;
bmds_set_aio_inflight(bmds, sector, nr_sectors, 1);
blk_mig_unlock();
} else {
ret = bdrv_read(bmds->bs, sector, blk->buf, nr_sectors);
if (ret < 0) {
@@ -509,9 +428,7 @@ error:
return ret;
}
/* Called with iothread lock taken.
*
* return value:
/* return value:
* 0: too much data for max_downtime
* 1: few enough data for max_downtime
*/
@@ -530,8 +447,6 @@ static int blk_mig_save_dirty_block(QEMUFile *f, int is_async)
return ret;
}
/* Called with no locks taken. */
static int flush_blks(QEMUFile *f)
{
BlkMigBlock *blk;
@@ -541,7 +456,6 @@ static int flush_blks(QEMUFile *f)
__FUNCTION__, block_mig_state.submitted, block_mig_state.read_done,
block_mig_state.transferred);
blk_mig_lock();
while ((blk = QSIMPLEQ_FIRST(&block_mig_state.blk_list)) != NULL) {
if (qemu_file_rate_limit(f)) {
break;
@@ -550,12 +464,9 @@ static int flush_blks(QEMUFile *f)
ret = blk->ret;
break;
}
blk_send(f, blk);
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.blk_list, entry);
blk_mig_unlock();
blk_send(f, blk);
blk_mig_lock();
g_free(blk->buf);
g_free(blk);
@@ -563,7 +474,6 @@ static int flush_blks(QEMUFile *f)
block_mig_state.transferred++;
assert(block_mig_state.read_done >= 0);
}
blk_mig_unlock();
DPRINTF("%s Exit submitted %d read_done %d transferred %d\n", __FUNCTION__,
block_mig_state.submitted, block_mig_state.read_done,
@@ -571,21 +481,43 @@ static int flush_blks(QEMUFile *f)
return ret;
}
/* Called with iothread lock taken. */
static int64_t get_remaining_dirty(void)
{
BlkMigDevState *bmds;
int64_t dirty = 0;
QSIMPLEQ_FOREACH(bmds, &block_mig_state.bmds_list, entry) {
dirty += bdrv_get_dirty_count(bmds->bs, bmds->dirty_bitmap);
dirty += bdrv_get_dirty_count(bmds->bs);
}
return dirty << BDRV_SECTOR_BITS;
return dirty * BLOCK_SIZE;
}
/* Called with iothread lock taken. */
static int is_stage2_completed(void)
{
int64_t remaining_dirty;
long double bwidth;
if (block_mig_state.bulk_completed == 1) {
remaining_dirty = get_remaining_dirty();
if (remaining_dirty == 0) {
return 1;
}
bwidth = compute_read_bwidth();
if ((remaining_dirty / bwidth) <=
migrate_max_downtime()) {
/* finish stage2 because we think that we can finish remaining work
below max_downtime */
return 1;
}
}
return 0;
}
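/* [Editor's aside, not part of this diff] Illustrative arithmetic for the
 * check above, with made-up numbers.  bwidth is in bytes per nanosecond
 * because total_time is accumulated from qemu_get_clock_ns(): */
static void downtime_check_sketch(void)
{
    long double bwidth = 0.1L;                    /* 0.1 byte/ns == 100 MB/s */
    int64_t remaining_dirty = 2 * 1024 * 1024;    /* 2 MiB still dirty */
    int64_t max_downtime_ns = 30000000;           /* 30 ms, a typical setting */

    /* 2 MiB / 0.1 byte/ns is roughly 21 ms, below 30 ms: stage 2 may finish. */
    assert(remaining_dirty / bwidth <= max_downtime_ns);
}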
static void blk_mig_cleanup(void)
{
@@ -594,13 +526,12 @@ static void blk_mig_cleanup(void)
bdrv_drain_all();
unset_dirty_tracking();
set_dirty_tracking(0);
blk_mig_lock();
while ((bmds = QSIMPLEQ_FIRST(&block_mig_state.bmds_list)) != NULL) {
QSIMPLEQ_REMOVE_HEAD(&block_mig_state.bmds_list, entry);
bdrv_set_in_use(bmds->bs, 0);
bdrv_unref(bmds->bs);
drive_put_ref(drive_get_by_blockdev(bmds->bs));
g_free(bmds->aio_bitmap);
g_free(bmds);
}
@@ -610,7 +541,6 @@ static void blk_mig_cleanup(void)
g_free(blk->buf);
g_free(blk);
}
blk_mig_unlock();
}
static void block_migration_cancel(void *opaque)
@@ -625,84 +555,72 @@ static int block_save_setup(QEMUFile *f, void *opaque)
DPRINTF("Enter save live setup submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
qemu_mutex_lock_iothread();
init_blk_migration(f);
/* start track dirty blocks */
ret = set_dirty_tracking();
set_dirty_tracking(1);
ret = flush_blks(f);
if (ret) {
qemu_mutex_unlock_iothread();
blk_mig_cleanup();
return ret;
}
init_blk_migration(f);
qemu_mutex_unlock_iothread();
ret = flush_blks(f);
blk_mig_reset_dirty_cursor();
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return ret;
return 0;
}
static int block_save_iterate(QEMUFile *f, void *opaque)
{
int ret;
int64_t last_ftell = qemu_ftell(f);
DPRINTF("Enter save live iterate submitted %d transferred %d\n",
block_mig_state.submitted, block_mig_state.transferred);
ret = flush_blks(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
blk_mig_reset_dirty_cursor();
/* control the rate of transfer */
blk_mig_lock();
while ((block_mig_state.submitted +
block_mig_state.read_done) * BLOCK_SIZE <
qemu_file_get_rate_limit(f)) {
blk_mig_unlock();
if (block_mig_state.bulk_completed == 0) {
/* first finish the bulk phase */
if (blk_mig_save_bulked_block(f) == 0) {
/* finished saving bulk on all devices */
block_mig_state.bulk_completed = 1;
}
ret = 0;
} else {
/* Always called with iothread lock taken for
* simplicity, block_save_complete also calls it.
*/
qemu_mutex_lock_iothread();
ret = blk_mig_save_dirty_block(f, 1);
qemu_mutex_unlock_iothread();
}
if (ret < 0) {
return ret;
}
blk_mig_lock();
if (ret != 0) {
/* no more dirty blocks */
break;
if (ret != 0) {
/* no more dirty blocks */
break;
}
}
}
blk_mig_unlock();
if (ret) {
blk_mig_cleanup();
return ret;
}
ret = flush_blks(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
return qemu_ftell(f) - last_ftell;
}
/* Called with iothread lock taken. */
return is_stage2_completed();
}
static int block_save_complete(QEMUFile *f, void *opaque)
{
@@ -713,6 +631,7 @@ static int block_save_complete(QEMUFile *f, void *opaque)
ret = flush_blks(f);
if (ret) {
blk_mig_cleanup();
return ret;
}
@@ -720,17 +639,16 @@ static int block_save_complete(QEMUFile *f, void *opaque)
/* we know for sure that save bulk is completed and
all async read completed */
blk_mig_lock();
assert(block_mig_state.submitted == 0);
blk_mig_unlock();
do {
ret = blk_mig_save_dirty_block(f, 0);
if (ret < 0) {
return ret;
}
} while (ret == 0);
blk_mig_cleanup();
if (ret) {
return ret;
}
/* report completion */
qemu_put_be64(f, (100 << BDRV_SECTOR_BITS) | BLK_MIG_FLAG_PROGRESS);
@@ -738,32 +656,9 @@ static int block_save_complete(QEMUFile *f, void *opaque)
qemu_put_be64(f, BLK_MIG_FLAG_EOS);
blk_mig_cleanup();
return 0;
}
static uint64_t block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size)
{
/* Estimate pending number of bytes to send */
uint64_t pending;
qemu_mutex_lock_iothread();
blk_mig_lock();
pending = get_remaining_dirty() +
block_mig_state.submitted * BLOCK_SIZE +
block_mig_state.read_done * BLOCK_SIZE;
/* Report at least one block pending during bulk phase */
if (pending == 0 && !block_mig_state.bulk_completed) {
pending = BLOCK_SIZE;
}
blk_mig_unlock();
qemu_mutex_unlock_iothread();
DPRINTF("Enter save live pending %" PRIu64 "\n", pending);
return pending;
}
static int block_load(QEMUFile *f, void *opaque, int version_id)
{
static int banner_printed;
@@ -811,16 +706,12 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
nr_sectors = BDRV_SECTORS_PER_DIRTY_CHUNK;
}
if (flags & BLK_MIG_FLAG_ZERO_BLOCK) {
ret = bdrv_write_zeroes(bs, addr, nr_sectors,
BDRV_REQ_MAY_UNMAP);
} else {
buf = g_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
}
buf = g_malloc(BLOCK_SIZE);
qemu_get_buffer(f, buf, BLOCK_SIZE);
ret = bdrv_write(bs, addr, buf, nr_sectors);
g_free(buf);
if (ret < 0) {
return ret;
}
@@ -833,7 +724,7 @@ static int block_load(QEMUFile *f, void *opaque, int version_id)
(addr == 100) ? '\n' : '\r');
fflush(stdout);
} else if (!(flags & BLK_MIG_FLAG_EOS)) {
fprintf(stderr, "Unknown block migration flags: %#x\n", flags);
fprintf(stderr, "Unknown flags\n");
return -EINVAL;
}
ret = qemu_file_get_error(f);
@@ -864,7 +755,6 @@ SaveVMHandlers savevm_block_handlers = {
.save_live_setup = block_save_setup,
.save_live_iterate = block_save_iterate,
.save_live_complete = block_save_complete,
.save_live_pending = block_save_pending,
.load_state = block_load,
.cancel = block_migration_cancel,
.is_active = block_is_active,
@@ -874,7 +764,6 @@ void blk_mig_init(void)
{
QSIMPLEQ_INIT(&block_mig_state.bmds_list);
QSIMPLEQ_INIT(&block_mig_state.blk_list);
qemu_mutex_init(&block_mig_state.lock);
register_savevm_live(NULL, "block", 0, 1, &savevm_block_handlers,
&block_mig_state);

3494
block.c

File diff suppressed because it is too large

View File

@@ -1,11 +1,11 @@
#ifndef BLOCK_H
#define BLOCK_H
#include "block/aio.h"
#include "qemu-aio.h"
#include "qemu-common.h"
#include "qemu/option.h"
#include "block/coroutine.h"
#include "qapi/qmp/qobject.h"
#include "qemu-option.h"
#include "qemu-coroutine.h"
#include "qobject.h"
#include "qapi-types.h"
/* block.c */
@@ -18,35 +18,25 @@ typedef struct BlockDriverInfo {
/* offset at which the VM state can be saved (0 if not possible) */
int64_t vm_state_offset;
bool is_dirty;
/*
* True if unallocated blocks read back as zeroes. This is equivalent
* to the LBPRZ flag in the SCSI logical block provisioning page.
*/
bool unallocated_blocks_are_zero;
/*
* True if the driver can optimize writing zeroes by unmapping
* sectors. This is equivalent to the BLKDISCARDZEROES ioctl in Linux
* with the difference that in qemu a discard is allowed to silently
* fail. Therefore we have to use bdrv_write_zeroes with the
* BDRV_REQ_MAY_UNMAP flag for an optimized zero write with unmapping.
* After this call the driver has to guarantee that the contents read
* back as zero. It is additionally required that the block device is
* opened with BDRV_O_UNMAP flag for this to work.
*/
bool can_write_zeroes_with_unmap;
/*
* True if this block driver only supports compressed writes
*/
bool needs_compressed_writes;
} BlockDriverInfo;
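/* [Editor's aside, not part of this diff] A hedged sketch of how a caller
 * might act on the two zero-related hints above; assumes a bs opened
 * elsewhere, with BDRV_O_UNMAP when unmapping is wanted: */
static int zero_range_sketch(BlockDriverState *bs, int64_t sector, int nb_sectors)
{
    BlockDriverInfo bdi;
    BdrvRequestFlags flags = 0;

    if (bdrv_get_info(bs, &bdi) == 0 && bdi.can_write_zeroes_with_unmap) {
        flags |= BDRV_REQ_MAY_UNMAP;    /* driver may discard instead of writing */
    }
    return bdrv_write_zeroes(bs, sector, nb_sectors, flags);
}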
typedef struct BlockFragInfo {
uint64_t allocated_clusters;
uint64_t total_clusters;
uint64_t fragmented_clusters;
uint64_t compressed_clusters;
} BlockFragInfo;
typedef struct QEMUSnapshotInfo {
char id_str[128]; /* unique snapshot id */
/* the following fields are informative. They are not needed for
the consistency of the snapshot */
char name[256]; /* user chosen name */
uint64_t vm_state_size; /* VM state info size */
uint32_t date_sec; /* UTC date of the snapshot */
uint32_t date_nsec;
uint64_t vm_clock_nsec; /* VM clock relative to boot */
} QEMUSnapshotInfo;
/* Callbacks for block device models */
typedef struct BlockDevOps {
/*
@@ -82,21 +72,8 @@ typedef struct BlockDevOps {
void (*resize_cb)(void *opaque);
} BlockDevOps;
typedef enum {
BDRV_REQ_COPY_ON_READ = 0x1,
BDRV_REQ_ZERO_WRITE = 0x2,
/* The BDRV_REQ_MAY_UNMAP flag is used to indicate that the block driver
* is allowed to optimize a write zeroes request by unmapping (discarding)
* blocks if it is guaranteed that the result will read back as
* zeroes. The flag is only passed to the driver if the block device is
* opened with BDRV_O_UNMAP.
*/
BDRV_REQ_MAY_UNMAP = 0x4,
} BdrvRequestFlags;
#define BDRV_O_RDWR 0x0002
#define BDRV_O_SNAPSHOT 0x0008 /* open the file read only and save writes in a snapshot */
#define BDRV_O_TEMPORARY 0x0010 /* delete the file after use */
#define BDRV_O_NOCACHE 0x0020 /* do not use the host page cache */
#define BDRV_O_CACHE_WB 0x0040 /* use write-back caching */
#define BDRV_O_NATIVE_AIO 0x0080 /* use native AIO instead of the thread pool */
@@ -106,10 +83,6 @@ typedef enum {
#define BDRV_O_INCOMING 0x0800 /* consistency hint for incoming migration */
#define BDRV_O_CHECK 0x1000 /* open solely for consistency check */
#define BDRV_O_ALLOW_RDWR 0x2000 /* allow reopen to change from r/o to r/w */
#define BDRV_O_UNMAP 0x4000 /* execute guest UNMAP/TRIM operations */
#define BDRV_O_PROTOCOL 0x8000 /* if no block driver is explicitly given:
select an appropriate protocol driver,
ignoring the format layer */
#define BDRV_O_CACHE_MASK (BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NO_FLUSH)
@@ -117,36 +90,6 @@ typedef enum {
#define BDRV_SECTOR_SIZE (1ULL << BDRV_SECTOR_BITS)
#define BDRV_SECTOR_MASK ~(BDRV_SECTOR_SIZE - 1)
/* BDRV_BLOCK_DATA: data is read from bs->file or another file
* BDRV_BLOCK_ZERO: sectors read as zero
* BDRV_BLOCK_OFFSET_VALID: sector stored in bs->file as raw data
* BDRV_BLOCK_RAW: used internally to indicate that the request
* was answered by the raw driver and that one
* should look in bs->file directly.
*
* If BDRV_BLOCK_OFFSET_VALID is set, bits 9-62 represent the offset in
* bs->file where sector data can be read from as raw data.
*
* DATA == 0 && ZERO == 0 means that data is read from backing_hd if present.
*
* DATA ZERO OFFSET_VALID
* t t t sectors read as zero, bs->file is zero at offset
* t f t sectors read as valid from bs->file at offset
* f t t sectors preallocated, read as zero, bs->file not
* necessarily zero at offset
* f f t sectors preallocated but read from backing_hd,
* bs->file contains garbage at offset
* t t f sectors preallocated, read as zero, unknown offset
* t f f sectors read from unknown file or offset
* f t f not allocated or unknown offset, read as zero
* f f f not allocated or unknown offset, read from backing_hd
*/
#define BDRV_BLOCK_DATA 1
#define BDRV_BLOCK_ZERO 2
#define BDRV_BLOCK_OFFSET_VALID 4
#define BDRV_BLOCK_RAW 8
#define BDRV_BLOCK_OFFSET_MASK BDRV_SECTOR_MASK
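/* [Editor's aside, not part of this diff] Decoding the status word described
 * in the table above, assuming a bs opened elsewhere: */
static void block_status_sketch(BlockDriverState *bs, int64_t sector)
{
    int pnum;
    int64_t ret = bdrv_get_block_status(bs, sector, 1, &pnum);

    if (ret < 0) {
        return;                              /* error reported by the driver */
    }
    if (ret & BDRV_BLOCK_ZERO) {
        /* this sector reads back as zero */
    }
    if ((ret & BDRV_BLOCK_DATA) && (ret & BDRV_BLOCK_OFFSET_VALID)) {
        int64_t host_offset = ret & BDRV_BLOCK_OFFSET_MASK;
        (void)host_offset;                   /* raw data at this offset in bs->file */
    }
}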
typedef enum {
BDRV_ACTION_REPORT, BDRV_ACTION_IGNORE, BDRV_ACTION_STOP
} BlockErrorAction;
@@ -173,32 +116,26 @@ void bdrv_info_stats(Monitor *mon, QObject **ret_data);
/* disk I/O throttling */
void bdrv_io_limits_enable(BlockDriverState *bs);
void bdrv_io_limits_disable(BlockDriverState *bs);
bool bdrv_io_limits_enabled(BlockDriverState *bs);
void bdrv_init(void);
void bdrv_init_with_whitelist(void);
BlockDriver *bdrv_find_protocol(const char *filename,
bool allow_protocol_prefix);
BlockDriver *bdrv_find_protocol(const char *filename);
BlockDriver *bdrv_find_format(const char *format_name);
BlockDriver *bdrv_find_whitelisted_format(const char *format_name,
bool readonly);
BlockDriver *bdrv_find_whitelisted_format(const char *format_name);
int bdrv_create(BlockDriver *drv, const char* filename,
QEMUOptionParameter *options, Error **errp);
int bdrv_create_file(const char* filename, QEMUOptionParameter *options,
Error **errp);
BlockDriverState *bdrv_new(const char *device_name, Error **errp);
QEMUOptionParameter *options);
int bdrv_create_file(const char* filename, QEMUOptionParameter *options);
BlockDriverState *bdrv_new(const char *device_name);
void bdrv_make_anon(BlockDriverState *bs);
void bdrv_swap(BlockDriverState *bs_new, BlockDriverState *bs_old);
void bdrv_append(BlockDriverState *bs_new, BlockDriverState *bs_top);
void bdrv_delete(BlockDriverState *bs);
int bdrv_parse_cache_flags(const char *mode, int *flags);
int bdrv_parse_discard_flags(const char *mode, int *flags);
int bdrv_open_image(BlockDriverState **pbs, const char *filename,
QDict *options, const char *bdref_key, int flags,
bool allow_none, Error **errp);
int bdrv_open_backing_file(BlockDriverState *bs, QDict *options, Error **errp);
void bdrv_append_temp_snapshot(BlockDriverState *bs, int flags, Error **errp);
int bdrv_open(BlockDriverState **pbs, const char *filename,
const char *reference, QDict *options, int flags,
BlockDriver *drv, Error **errp);
int bdrv_file_open(BlockDriverState **pbs, const char *filename, int flags);
int bdrv_open_backing_file(BlockDriverState *bs);
int bdrv_open(BlockDriverState *bs, const char *filename, int flags,
BlockDriver *drv);
BlockReopenQueue *bdrv_reopen_queue(BlockReopenQueue *bs_queue,
BlockDriverState *bs, int flags);
int bdrv_reopen_multiple(BlockReopenQueue *bs_queue, Error **errp);
@@ -225,17 +162,10 @@ int bdrv_read_unthrottled(BlockDriverState *bs, int64_t sector_num,
uint8_t *buf, int nb_sectors);
int bdrv_write(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_write_zeroes(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, BdrvRequestFlags flags);
BlockDriverAIOCB *bdrv_aio_write_zeroes(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, BdrvRequestFlags flags,
BlockDriverCompletionFunc *cb, void *opaque);
int bdrv_make_zero(BlockDriverState *bs, BdrvRequestFlags flags);
int bdrv_pread(BlockDriverState *bs, int64_t offset,
void *buf, int count);
int bdrv_pwrite(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int bdrv_pwritev(BlockDriverState *bs, int64_t offset, QEMUIOVector *qiov);
int bdrv_pwrite_sync(BlockDriverState *bs, int64_t offset,
const void *buf, int count);
int coroutine_fn bdrv_co_readv(BlockDriverState *bs, int64_t sector_num,
@@ -251,7 +181,13 @@ int coroutine_fn bdrv_co_writev(BlockDriverState *bs, int64_t sector_num,
* because it may allocate memory for the entire region.
*/
int coroutine_fn bdrv_co_write_zeroes(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, BdrvRequestFlags flags);
int nb_sectors);
int coroutine_fn bdrv_co_is_allocated(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum);
int coroutine_fn bdrv_co_is_allocated_above(BlockDriverState *top,
BlockDriverState *base,
int64_t sector_num,
int nb_sectors, int *pnum);
BlockDriverState *bdrv_find_backing_image(BlockDriverState *bs,
const char *backing_file);
int bdrv_get_backing_file_depth(BlockDriverState *bs);
@@ -259,7 +195,6 @@ int bdrv_truncate(BlockDriverState *bs, int64_t offset);
int64_t bdrv_getlength(BlockDriverState *bs);
int64_t bdrv_get_allocated_file_size(BlockDriverState *bs);
void bdrv_get_geometry(BlockDriverState *bs, uint64_t *nb_sectors_ptr);
int bdrv_refresh_limits(BlockDriverState *bs);
int bdrv_commit(BlockDriverState *bs);
int bdrv_commit_all(void);
int bdrv_change_backing_file(BlockDriverState *bs,
@@ -278,7 +213,6 @@ typedef struct BdrvCheckResult {
int check_errors;
int corruptions_fixed;
int leaks_fixed;
int64_t image_end_offset;
BlockFragInfo bfi;
} BdrvCheckResult;
@@ -289,13 +223,6 @@ typedef enum {
int bdrv_check(BlockDriverState *bs, BdrvCheckResult *res, BdrvCheckMode fix);
int bdrv_amend_options(BlockDriverState *bs_new, QEMUOptionParameter *options);
/* external snapshots */
bool bdrv_recurse_is_first_non_filter(BlockDriverState *bs,
BlockDriverState *candidate);
bool bdrv_is_first_non_filter(BlockDriverState *candidate);
/* async block I/O */
typedef void BlockDriverDirtyHandler(BlockDriverState *bs, int64_t sector,
int sector_num);
@@ -316,7 +243,6 @@ typedef struct BlockRequest {
/* Fields to be filled by multiwrite caller */
int64_t sector;
int nb_sectors;
int flags;
QEMUIOVector *qiov;
BlockDriverCompletionFunc *cb;
void *opaque;
@@ -335,30 +261,23 @@ BlockDriverAIOCB *bdrv_aio_ioctl(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque);
/* Invalidate any cached metadata used by image formats */
void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp);
void bdrv_invalidate_cache_all(Error **errp);
void bdrv_invalidate_cache(BlockDriverState *bs);
void bdrv_invalidate_cache_all(void);
void bdrv_clear_incoming_migration_all(void);
/* Ensure contents are flushed to disk. */
int bdrv_flush(BlockDriverState *bs);
int coroutine_fn bdrv_co_flush(BlockDriverState *bs);
int bdrv_flush_all(void);
void bdrv_flush_all(void);
void bdrv_close_all(void);
void bdrv_drain_all(void);
int bdrv_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
int bdrv_co_discard(BlockDriverState *bs, int64_t sector_num, int nb_sectors);
int bdrv_has_zero_init_1(BlockDriverState *bs);
int bdrv_has_zero_init(BlockDriverState *bs);
bool bdrv_unallocated_blocks_are_zero(BlockDriverState *bs);
bool bdrv_can_write_zeroes_with_unmap(BlockDriverState *bs);
int64_t bdrv_get_block_status(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, int *pnum);
int bdrv_is_allocated(BlockDriverState *bs, int64_t sector_num, int nb_sectors,
int *pnum);
int bdrv_is_allocated_above(BlockDriverState *top, BlockDriverState *base,
int64_t sector_num, int nb_sectors, int *pnum);
void bdrv_set_on_error(BlockDriverState *bs, BlockdevOnError on_read_error,
BlockdevOnError on_write_error);
@@ -376,11 +295,6 @@ void bdrv_lock_medium(BlockDriverState *bs, bool locked);
void bdrv_eject(BlockDriverState *bs, bool eject_flag);
const char *bdrv_get_format_name(BlockDriverState *bs);
BlockDriverState *bdrv_find(const char *name);
BlockDriverState *bdrv_find_node(const char *node_name);
BlockDeviceInfoList *bdrv_named_nodes_list(void);
BlockDriverState *bdrv_lookup_bs(const char *device,
const char *node_name,
Error **errp);
BlockDriverState *bdrv_next(BlockDriverState *bs);
void bdrv_iterate(void (*it)(void *opaque, BlockDriverState *bs),
void *opaque);
@@ -395,73 +309,62 @@ int bdrv_get_flags(BlockDriverState *bs);
int bdrv_write_compressed(BlockDriverState *bs, int64_t sector_num,
const uint8_t *buf, int nb_sectors);
int bdrv_get_info(BlockDriverState *bs, BlockDriverInfo *bdi);
ImageInfoSpecific *bdrv_get_specific_info(BlockDriverState *bs);
void bdrv_round_to_clusters(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
int64_t *cluster_sector_num,
int *cluster_nb_sectors);
const char *bdrv_get_encrypted_filename(BlockDriverState *bs);
void bdrv_get_backing_filename(BlockDriverState *bs,
char *filename, int filename_size);
void bdrv_get_full_backing_filename(BlockDriverState *bs,
char *dest, size_t sz);
BlockInfo *bdrv_query_info(BlockDriverState *s);
BlockStats *bdrv_query_stats(const BlockDriverState *bs);
int bdrv_can_snapshot(BlockDriverState *bs);
int bdrv_is_snapshot(BlockDriverState *bs);
BlockDriverState *bdrv_snapshots(void);
int bdrv_snapshot_create(BlockDriverState *bs,
QEMUSnapshotInfo *sn_info);
int bdrv_snapshot_goto(BlockDriverState *bs,
const char *snapshot_id);
int bdrv_snapshot_delete(BlockDriverState *bs, const char *snapshot_id);
int bdrv_snapshot_list(BlockDriverState *bs,
QEMUSnapshotInfo **psn_info);
int bdrv_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_name);
char *bdrv_snapshot_dump(char *buf, int buf_size, QEMUSnapshotInfo *sn);
char *get_human_readable_size(char *buf, int buf_size, int64_t size);
int path_is_absolute(const char *path);
void path_combine(char *dest, int dest_size,
const char *base_path,
const char *filename);
int bdrv_writev_vmstate(BlockDriverState *bs, QEMUIOVector *qiov, int64_t pos);
int bdrv_save_vmstate(BlockDriverState *bs, const uint8_t *buf,
int64_t pos, int size);
int bdrv_load_vmstate(BlockDriverState *bs, uint8_t *buf,
int64_t pos, int size);
void bdrv_img_create(const char *filename, const char *fmt,
const char *base_filename, const char *base_fmt,
char *options, uint64_t img_size, int flags,
Error **errp, bool quiet);
int bdrv_img_create(const char *filename, const char *fmt,
const char *base_filename, const char *base_fmt,
char *options, uint64_t img_size, int flags);
/* Returns the alignment in bytes that is required so that no bounce buffer
* is required throughout the stack */
size_t bdrv_opt_mem_align(BlockDriverState *bs);
void bdrv_set_guest_block_size(BlockDriverState *bs, int align);
void bdrv_set_buffer_alignment(BlockDriverState *bs, int align);
void *qemu_blockalign(BlockDriverState *bs, size_t size);
bool bdrv_qiov_is_aligned(BlockDriverState *bs, QEMUIOVector *qiov);
struct HBitmapIter;
typedef struct BdrvDirtyBitmap BdrvDirtyBitmap;
BdrvDirtyBitmap *bdrv_create_dirty_bitmap(BlockDriverState *bs, int granularity,
Error **errp);
void bdrv_release_dirty_bitmap(BlockDriverState *bs, BdrvDirtyBitmap *bitmap);
BlockDirtyInfoList *bdrv_query_dirty_bitmaps(BlockDriverState *bs);
int bdrv_get_dirty(BlockDriverState *bs, BdrvDirtyBitmap *bitmap, int64_t sector);
#define BDRV_SECTORS_PER_DIRTY_CHUNK 2048
void bdrv_set_dirty_tracking(BlockDriverState *bs, int enable);
int bdrv_get_dirty(BlockDriverState *bs, int64_t sector);
void bdrv_set_dirty(BlockDriverState *bs, int64_t cur_sector, int nr_sectors);
void bdrv_reset_dirty(BlockDriverState *bs, int64_t cur_sector, int nr_sectors);
void bdrv_dirty_iter_init(BlockDriverState *bs,
BdrvDirtyBitmap *bitmap, struct HBitmapIter *hbi);
int64_t bdrv_get_dirty_count(BlockDriverState *bs, BdrvDirtyBitmap *bitmap);
int64_t bdrv_get_next_dirty(BlockDriverState *bs, int64_t sector);
int64_t bdrv_get_dirty_count(BlockDriverState *bs);
void bdrv_enable_copy_on_read(BlockDriverState *bs);
void bdrv_disable_copy_on_read(BlockDriverState *bs);
void bdrv_ref(BlockDriverState *bs);
void bdrv_unref(BlockDriverState *bs);
void bdrv_set_in_use(BlockDriverState *bs, int in_use);
int bdrv_in_use(BlockDriverState *bs);
#ifdef CONFIG_LINUX_AIO
int raw_get_aio_fd(BlockDriverState *bs);
#else
static inline int raw_get_aio_fd(BlockDriverState *bs)
{
return -ENOTSUP;
}
#endif
enum BlockAcctType {
BDRV_ACCT_READ,
BDRV_ACCT_WRITE,
@@ -507,7 +410,6 @@ typedef enum {
BLKDBG_REFTABLE_LOAD,
BLKDBG_REFTABLE_GROW,
BLKDBG_REFTABLE_UPDATE,
BLKDBG_REFBLOCK_LOAD,
BLKDBG_REFBLOCK_UPDATE,
@@ -523,27 +425,10 @@ typedef enum {
BLKDBG_CLUSTER_ALLOC_BYTES,
BLKDBG_CLUSTER_FREE,
BLKDBG_FLUSH_TO_OS,
BLKDBG_FLUSH_TO_DISK,
BLKDBG_PWRITEV_RMW_HEAD,
BLKDBG_PWRITEV_RMW_AFTER_HEAD,
BLKDBG_PWRITEV_RMW_TAIL,
BLKDBG_PWRITEV_RMW_AFTER_TAIL,
BLKDBG_PWRITEV,
BLKDBG_PWRITEV_ZERO,
BLKDBG_PWRITEV_DONE,
BLKDBG_EVENT_MAX,
} BlkDebugEvent;
#define BLKDBG_EVENT(bs, evt) bdrv_debug_event(bs, evt)
void bdrv_debug_event(BlockDriverState *bs, BlkDebugEvent event);
int bdrv_debug_breakpoint(BlockDriverState *bs, const char *event,
const char *tag);
int bdrv_debug_remove_breakpoint(BlockDriverState *bs, const char *tag);
int bdrv_debug_resume(BlockDriverState *bs, const char *tag);
bool bdrv_debug_is_suspended(BlockDriverState *bs, const char *tag);
#endif

View File

@@ -1,39 +1,22 @@
block-obj-y += raw_bsd.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += raw.o cow.o qcow.o vdi.o vmdk.o cloop.o dmg.o bochs.o vpc.o vvfat.o
block-obj-y += qcow2.o qcow2-refcount.o qcow2-cluster.o qcow2-snapshot.o qcow2-cache.o
block-obj-y += qed.o qed-gencb.o qed-l2-cache.o qed-table.o qed-cluster.o
block-obj-y += qed-check.o
block-obj-$(CONFIG_VHDX) += vhdx.o vhdx-endian.o vhdx-log.o
block-obj-$(CONFIG_QUORUM) += quorum.o
block-obj-y += parallels.o blkdebug.o blkverify.o
block-obj-y += snapshot.o qapi.o
block-obj-$(CONFIG_WIN32) += raw-win32.o win32-aio.o
block-obj-$(CONFIG_POSIX) += raw-posix.o
block-obj-$(CONFIG_LINUX_AIO) += linux-aio.o
ifeq ($(CONFIG_POSIX),y)
block-obj-y += nbd.o nbd-client.o sheepdog.o
block-obj-y += nbd.o sheepdog.o
block-obj-$(CONFIG_LIBISCSI) += iscsi.o
block-obj-$(CONFIG_LIBNFS) += nfs.o
block-obj-$(CONFIG_CURL) += curl.o
block-obj-$(CONFIG_RBD) += rbd.o
block-obj-$(CONFIG_GLUSTERFS) += gluster.o
block-obj-$(CONFIG_LIBSSH2) += ssh.o
endif
common-obj-y += stream.o
common-obj-y += commit.o
common-obj-y += mirror.o
common-obj-y += backup.o
iscsi.o-cflags := $(LIBISCSI_CFLAGS)
iscsi.o-libs := $(LIBISCSI_LIBS)
curl.o-cflags := $(CURL_CFLAGS)
curl.o-libs := $(CURL_LIBS)
rbd.o-cflags := $(RBD_CFLAGS)
rbd.o-libs := $(RBD_LIBS)
gluster.o-cflags := $(GLUSTERFS_CFLAGS)
gluster.o-libs := $(GLUSTERFS_LIBS)
ssh.o-cflags := $(LIBSSH2_CFLAGS)
ssh.o-libs := $(LIBSSH2_LIBS)
qcow.o-libs := -lz
linux-aio.o-libs := -laio
common-obj-y += dictzip.o
common-obj-y += tar.o

View File

@@ -1,392 +0,0 @@
/*
* QEMU backup
*
* Copyright (C) 2013 Proxmox Server Solutions
*
* Authors:
* Dietmar Maurer (dietmar@proxmox.com)
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
*
*/
#include <stdio.h>
#include <errno.h>
#include <unistd.h>
#include "trace.h"
#include "block/block.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "qemu/ratelimit.h"
#define BACKUP_CLUSTER_BITS 16
#define BACKUP_CLUSTER_SIZE (1 << BACKUP_CLUSTER_BITS)
#define BACKUP_SECTORS_PER_CLUSTER (BACKUP_CLUSTER_SIZE / BDRV_SECTOR_SIZE)
#define SLICE_TIME 100000000ULL /* ns */
typedef struct CowRequest {
int64_t start;
int64_t end;
QLIST_ENTRY(CowRequest) list;
CoQueue wait_queue; /* coroutines blocked on this request */
} CowRequest;
typedef struct BackupBlockJob {
BlockJob common;
BlockDriverState *target;
MirrorSyncMode sync_mode;
RateLimit limit;
BlockdevOnError on_source_error;
BlockdevOnError on_target_error;
CoRwlock flush_rwlock;
uint64_t sectors_read;
HBitmap *bitmap;
QLIST_HEAD(, CowRequest) inflight_reqs;
} BackupBlockJob;
/* See if in-flight requests overlap and wait for them to complete */
static void coroutine_fn wait_for_overlapping_requests(BackupBlockJob *job,
int64_t start,
int64_t end)
{
CowRequest *req;
bool retry;
do {
retry = false;
QLIST_FOREACH(req, &job->inflight_reqs, list) {
if (end > req->start && start < req->end) {
qemu_co_queue_wait(&req->wait_queue);
retry = true;
break;
}
}
} while (retry);
}
/* Keep track of an in-flight request */
static void cow_request_begin(CowRequest *req, BackupBlockJob *job,
int64_t start, int64_t end)
{
req->start = start;
req->end = end;
qemu_co_queue_init(&req->wait_queue);
QLIST_INSERT_HEAD(&job->inflight_reqs, req, list);
}
/* Forget about a completed request */
static void cow_request_end(CowRequest *req)
{
QLIST_REMOVE(req, list);
qemu_co_queue_restart_all(&req->wait_queue);
}
static int coroutine_fn backup_do_cow(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
bool *error_is_read)
{
BackupBlockJob *job = (BackupBlockJob *)bs->job;
CowRequest cow_request;
struct iovec iov;
QEMUIOVector bounce_qiov;
void *bounce_buffer = NULL;
int ret = 0;
int64_t start, end;
int n;
qemu_co_rwlock_rdlock(&job->flush_rwlock);
start = sector_num / BACKUP_SECTORS_PER_CLUSTER;
end = DIV_ROUND_UP(sector_num + nb_sectors, BACKUP_SECTORS_PER_CLUSTER);
trace_backup_do_cow_enter(job, start, sector_num, nb_sectors);
wait_for_overlapping_requests(job, start, end);
cow_request_begin(&cow_request, job, start, end);
for (; start < end; start++) {
if (hbitmap_get(job->bitmap, start)) {
trace_backup_do_cow_skip(job, start);
continue; /* already copied */
}
trace_backup_do_cow_process(job, start);
n = MIN(BACKUP_SECTORS_PER_CLUSTER,
job->common.len / BDRV_SECTOR_SIZE -
start * BACKUP_SECTORS_PER_CLUSTER);
if (!bounce_buffer) {
bounce_buffer = qemu_blockalign(bs, BACKUP_CLUSTER_SIZE);
}
iov.iov_base = bounce_buffer;
iov.iov_len = n * BDRV_SECTOR_SIZE;
qemu_iovec_init_external(&bounce_qiov, &iov, 1);
ret = bdrv_co_readv(bs, start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
if (ret < 0) {
trace_backup_do_cow_read_fail(job, start, ret);
if (error_is_read) {
*error_is_read = true;
}
goto out;
}
if (buffer_is_zero(iov.iov_base, iov.iov_len)) {
ret = bdrv_co_write_zeroes(job->target,
start * BACKUP_SECTORS_PER_CLUSTER,
n, BDRV_REQ_MAY_UNMAP);
} else {
ret = bdrv_co_writev(job->target,
start * BACKUP_SECTORS_PER_CLUSTER, n,
&bounce_qiov);
}
if (ret < 0) {
trace_backup_do_cow_write_fail(job, start, ret);
if (error_is_read) {
*error_is_read = false;
}
goto out;
}
hbitmap_set(job->bitmap, start, 1);
/* Publish progress, guest I/O counts as progress too. Note that the
* offset field is an opaque progress value, it is not a disk offset.
*/
job->sectors_read += n;
job->common.offset += n * BDRV_SECTOR_SIZE;
}
out:
if (bounce_buffer) {
qemu_vfree(bounce_buffer);
}
cow_request_end(&cow_request);
trace_backup_do_cow_return(job, sector_num, nb_sectors, ret);
qemu_co_rwlock_unlock(&job->flush_rwlock);
return ret;
}
static int coroutine_fn backup_before_write_notify(
NotifierWithReturn *notifier,
void *opaque)
{
BdrvTrackedRequest *req = opaque;
int64_t sector_num = req->offset >> BDRV_SECTOR_BITS;
int nb_sectors = req->bytes >> BDRV_SECTOR_BITS;
assert((req->offset & (BDRV_SECTOR_SIZE - 1)) == 0);
assert((req->bytes & (BDRV_SECTOR_SIZE - 1)) == 0);
return backup_do_cow(req->bs, sector_num, nb_sectors, NULL);
}
static void backup_set_speed(BlockJob *job, int64_t speed, Error **errp)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
if (speed < 0) {
error_set(errp, QERR_INVALID_PARAMETER, "speed");
return;
}
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static void backup_iostatus_reset(BlockJob *job)
{
BackupBlockJob *s = container_of(job, BackupBlockJob, common);
bdrv_iostatus_reset(s->target);
}
static const BlockJobDriver backup_job_driver = {
.instance_size = sizeof(BackupBlockJob),
.job_type = BLOCK_JOB_TYPE_BACKUP,
.set_speed = backup_set_speed,
.iostatus_reset = backup_iostatus_reset,
};
static BlockErrorAction backup_error_action(BackupBlockJob *job,
bool read, int error)
{
if (read) {
return block_job_error_action(&job->common, job->common.bs,
job->on_source_error, true, error);
} else {
return block_job_error_action(&job->common, job->target,
job->on_target_error, false, error);
}
}
static void coroutine_fn backup_run(void *opaque)
{
BackupBlockJob *job = opaque;
BlockDriverState *bs = job->common.bs;
BlockDriverState *target = job->target;
BlockdevOnError on_target_error = job->on_target_error;
NotifierWithReturn before_write = {
.notify = backup_before_write_notify,
};
int64_t start, end;
int ret = 0;
QLIST_INIT(&job->inflight_reqs);
qemu_co_rwlock_init(&job->flush_rwlock);
start = 0;
end = DIV_ROUND_UP(job->common.len / BDRV_SECTOR_SIZE,
BACKUP_SECTORS_PER_CLUSTER);
job->bitmap = hbitmap_alloc(end, 0);
bdrv_set_enable_write_cache(target, true);
bdrv_set_on_error(target, on_target_error, on_target_error);
bdrv_iostatus_enable(target);
bdrv_add_before_write_notifier(bs, &before_write);
if (job->sync_mode == MIRROR_SYNC_MODE_NONE) {
while (!block_job_is_cancelled(&job->common)) {
/* Yield until the job is cancelled. We just let our before_write
* notify callback service CoW requests. */
job->common.busy = false;
qemu_coroutine_yield();
job->common.busy = true;
}
} else {
/* Both FULL and TOP SYNC_MODE require copying. */
for (; start < end; start++) {
bool error_is_read;
if (block_job_is_cancelled(&job->common)) {
break;
}
/* We need to yield so that qemu_aio_flush() returns.
* (Without it, the VM does not reboot.)
*/
if (job->common.speed) {
uint64_t delay_ns = ratelimit_calculate_delay(
&job->limit, job->sectors_read);
job->sectors_read = 0;
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, delay_ns);
} else {
block_job_sleep_ns(&job->common, QEMU_CLOCK_REALTIME, 0);
}
if (block_job_is_cancelled(&job->common)) {
break;
}
if (job->sync_mode == MIRROR_SYNC_MODE_TOP) {
int i, n;
int alloced = 0;
/* Check to see if these blocks are already in the
* backing file. */
for (i = 0; i < BACKUP_SECTORS_PER_CLUSTER;) {
/* bdrv_is_allocated() only returns true/false based
* on the first set of sectors it comes across that
* are all in the same state.
* For that reason we must check each sector in the
* backup cluster length. We may end up copying more
* than needed, but at cluster granularity that is
* always the case. */
alloced =
bdrv_is_allocated(bs,
start * BACKUP_SECTORS_PER_CLUSTER + i,
BACKUP_SECTORS_PER_CLUSTER - i, &n);
i += n;
if (alloced == 1) {
break;
}
}
/* If the above loop never found any sectors that are in
* the topmost image, skip this backup. */
if (alloced == 0) {
continue;
}
}
/* In FULL sync mode we copy the whole drive. */
ret = backup_do_cow(bs, start * BACKUP_SECTORS_PER_CLUSTER,
BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
if (ret < 0) {
/* Depending on error action, fail now or retry cluster */
BlockErrorAction action =
backup_error_action(job, error_is_read, -ret);
if (action == BDRV_ACTION_REPORT) {
break;
} else {
start--;
continue;
}
}
}
}
notifier_with_return_remove(&before_write);
/* wait until pending backup_do_cow() calls have completed */
qemu_co_rwlock_wrlock(&job->flush_rwlock);
qemu_co_rwlock_unlock(&job->flush_rwlock);
hbitmap_free(job->bitmap);
bdrv_iostatus_disable(target);
bdrv_unref(target);
block_job_completed(&job->common, ret);
}
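The comment about bdrv_is_allocated() in the TOP-mode loop is the subtle part of backup_run(). A minimal sketch with a mocked allocation map (mock_is_allocated is invented for illustration, it is not a QEMU API) shows why a single query is not enough and the cluster has to be probed run by run:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SECTORS_PER_CLUSTER 128
#define MIN(a, b)           ((a) < (b) ? (a) : (b))

/* Mocked allocation map: only sectors [128, 140) live in the top image,
 * everything else comes from the backing file. Like bdrv_is_allocated(),
 * it reports the length of the first uniform run via *n. */
static int mock_is_allocated(int64_t sector, int nb, int *n)
{
    if (sector < 128) {
        *n = MIN(nb, (int)(128 - sector));
        return 0;
    } else if (sector < 140) {
        *n = MIN(nb, (int)(140 - sector));
        return 1;
    }
    *n = nb;
    return 0;
}

static bool cluster_needs_copy(int64_t cluster)
{
    int64_t base = cluster * SECTORS_PER_CLUSTER;
    int i, n, alloced = 0;

    /* Keep probing until the whole cluster has been covered or an
     * allocated run turns up. */
    for (i = 0; i < SECTORS_PER_CLUSTER; ) {
        alloced = mock_is_allocated(base + i, SECTORS_PER_CLUSTER - i, &n);
        i += n;
        if (alloced == 1) {
            break;
        }
    }
    return alloced == 1;
}

int main(void)
{
    printf("cluster 0 needs copy: %d\n", cluster_needs_copy(0)); /* 0: fully backed */
    printf("cluster 1 needs copy: %d\n", cluster_needs_copy(1)); /* 1: partly allocated */
    return 0;
}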
void backup_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode sync_mode,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb, void *opaque,
Error **errp)
{
int64_t len;
assert(bs);
assert(target);
assert(cb);
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_set(errp, QERR_INVALID_PARAMETER, "on-source-error");
return;
}
len = bdrv_getlength(bs);
if (len < 0) {
error_setg_errno(errp, -len, "unable to get length for '%s'",
bdrv_get_device_name(bs));
return;
}
BackupBlockJob *job = block_job_create(&backup_job_driver, bs, speed,
cb, opaque, errp);
if (!job) {
return;
}
job->on_source_error = on_source_error;
job->on_target_error = on_target_error;
job->target = target;
job->sync_mode = sync_mode;
job->common.len = len;
job->common.co = qemu_coroutine_create(backup_run);
qemu_coroutine_enter(job->common.co, job);
}


@@ -23,17 +23,14 @@
*/
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
typedef struct BDRVBlkdebugState {
int state;
int new_state;
QLIST_HEAD(, BlkdebugRule) rules[BLKDBG_EVENT_MAX];
QSIMPLEQ_HEAD(, BlkdebugRule) active_rules;
QLIST_HEAD(, BlkdebugSuspendedReq) suspended_reqs;
} BDRVBlkdebugState;
typedef struct BlkdebugAIOCB {
@@ -42,12 +39,6 @@ typedef struct BlkdebugAIOCB {
int ret;
} BlkdebugAIOCB;
typedef struct BlkdebugSuspendedReq {
Coroutine *co;
char *tag;
QLIST_ENTRY(BlkdebugSuspendedReq) next;
} BlkdebugSuspendedReq;
static void blkdebug_aio_cancel(BlockDriverAIOCB *blockacb);
static const AIOCBInfo blkdebug_aiocb_info = {
@@ -58,7 +49,6 @@ static const AIOCBInfo blkdebug_aiocb_info = {
enum {
ACTION_INJECT_ERROR,
ACTION_SET_STATE,
ACTION_SUSPEND,
};
typedef struct BlkdebugRule {
@@ -75,9 +65,6 @@ typedef struct BlkdebugRule {
struct {
int new_state;
} set_state;
struct {
char *tag;
} suspend;
} options;
QLIST_ENTRY(BlkdebugRule) next;
QSIMPLEQ_ENTRY(BlkdebugRule) active_next;
@@ -168,7 +155,6 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_REFTABLE_LOAD] = "reftable_load",
[BLKDBG_REFTABLE_GROW] = "reftable_grow",
[BLKDBG_REFTABLE_UPDATE] = "reftable_update",
[BLKDBG_REFBLOCK_LOAD] = "refblock_load",
[BLKDBG_REFBLOCK_UPDATE] = "refblock_update",
@@ -183,17 +169,6 @@ static const char *event_names[BLKDBG_EVENT_MAX] = {
[BLKDBG_CLUSTER_ALLOC] = "cluster_alloc",
[BLKDBG_CLUSTER_ALLOC_BYTES] = "cluster_alloc_bytes",
[BLKDBG_CLUSTER_FREE] = "cluster_free",
[BLKDBG_FLUSH_TO_OS] = "flush_to_os",
[BLKDBG_FLUSH_TO_DISK] = "flush_to_disk",
[BLKDBG_PWRITEV_RMW_HEAD] = "pwritev_rmw.head",
[BLKDBG_PWRITEV_RMW_AFTER_HEAD] = "pwritev_rmw.after_head",
[BLKDBG_PWRITEV_RMW_TAIL] = "pwritev_rmw.tail",
[BLKDBG_PWRITEV_RMW_AFTER_TAIL] = "pwritev_rmw.after_tail",
[BLKDBG_PWRITEV] = "pwritev",
[BLKDBG_PWRITEV_ZERO] = "pwritev_zero",
[BLKDBG_PWRITEV_DONE] = "pwritev_done",
};
static int get_event_by_name(const char *name, BlkDebugEvent *event)
@@ -251,11 +226,6 @@ static int add_rule(QemuOpts *opts, void *opaque)
rule->options.set_state.new_state =
qemu_opt_get_number(opts, "new_state", 0);
break;
case ACTION_SUSPEND:
rule->options.suspend.tag =
g_strdup(qemu_opt_get(opts, "tag"));
break;
};
/* Add the rule */
@@ -264,48 +234,19 @@ static int add_rule(QemuOpts *opts, void *opaque)
return 0;
}
static void remove_rule(BlkdebugRule *rule)
static int read_config(BDRVBlkdebugState *s, const char *filename)
{
switch (rule->action) {
case ACTION_INJECT_ERROR:
case ACTION_SET_STATE:
break;
case ACTION_SUSPEND:
g_free(rule->options.suspend.tag);
break;
}
QLIST_REMOVE(rule, next);
g_free(rule);
}
static int read_config(BDRVBlkdebugState *s, const char *filename,
QDict *options, Error **errp)
{
FILE *f = NULL;
FILE *f;
int ret;
struct add_rule_data d;
Error *local_err = NULL;
if (filename) {
f = fopen(filename, "r");
if (f == NULL) {
error_setg_errno(errp, errno, "Could not read blkdebug config file");
return -errno;
}
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
error_setg(errp, "Could not parse blkdebug config file");
ret = -EINVAL;
goto fail;
}
f = fopen(filename, "r");
if (f == NULL) {
return -errno;
}
qemu_config_parse_qdict(options, config_groups, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
ret = qemu_config_parse(f, config_groups, filename);
if (ret < 0) {
goto fail;
}
@@ -320,122 +261,48 @@ static int read_config(BDRVBlkdebugState *s, const char *filename,
fail:
qemu_opts_reset(&inject_error_opts);
qemu_opts_reset(&set_state_opts);
if (f) {
fclose(f);
}
fclose(f);
return ret;
}
/* Valid blkdebug filenames look like blkdebug:path/to/config:path/to/image */
static void blkdebug_parse_filename(const char *filename, QDict *options,
Error **errp)
{
const char *c;
/* Parse the blkdebug: prefix */
if (!strstart(filename, "blkdebug:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
return;
}
/* Parse config file path */
c = strchr(filename, ':');
if (c == NULL) {
error_setg(errp, "blkdebug requires both config file and image path");
return;
}
if (c != filename) {
QString *config_path;
config_path = qstring_from_substr(filename, 0, c - filename - 1);
qdict_put(options, "config", config_path);
}
/* TODO Allow multi-level nesting and set file.filename here */
filename = c + 1;
qdict_put(options, "x-image", qstring_from_str(filename));
}
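For reference, here is a hedged standalone sketch of the same blkdebug:<config>:<image> split using plain C string handling instead of the QDict/QString helpers; the example filename is made up.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *filename = "blkdebug:/tmp/blkdebug.conf:test.qcow2";
    const char *prefix = "blkdebug:";
    const char *rest, *colon;

    if (strncmp(filename, prefix, strlen(prefix)) != 0) {
        /* No prefix: the whole string is the image filename. */
        printf("image:  %s\n", filename);
        return 0;
    }
    rest = filename + strlen(prefix);

    colon = strchr(rest, ':');
    if (colon == NULL) {
        fprintf(stderr, "blkdebug requires both config file and image path\n");
        return 1;
    }

    if (colon != rest) {
        /* Everything before the separating colon is the config file. */
        printf("config: %.*s\n", (int)(colon - rest), rest);
    }
    printf("image:  %s\n", colon + 1);
    return 0;
}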
static QemuOptsList runtime_opts = {
.name = "blkdebug",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "config",
.type = QEMU_OPT_STRING,
.help = "Path to the configuration file",
},
{
.name = "x-image",
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{
.name = "align",
.type = QEMU_OPT_SIZE,
.help = "Required alignment in bytes",
},
{ /* end of list */ }
},
};
static int blkdebug_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int blkdebug_open(BlockDriverState *bs, const char *filename, int flags)
{
BDRVBlkdebugState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
const char *config;
uint64_t align;
int ret;
char *config, *c;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto out;
/* Parse the blkdebug: prefix */
if (strncmp(filename, "blkdebug:", strlen("blkdebug:"))) {
return -EINVAL;
}
filename += strlen("blkdebug:");
/* Read rules from config file */
c = strchr(filename, ':');
if (c == NULL) {
return -EINVAL;
}
/* Read rules from config file or command line options */
config = qemu_opt_get(opts, "config");
ret = read_config(s, config, options, errp);
if (ret) {
goto out;
config = g_strdup(filename);
config[c - filename] = '\0';
ret = read_config(s, config);
g_free(config);
if (ret < 0) {
return ret;
}
filename = c + 1;
/* Set initial state */
s->state = 1;
/* Open the backing file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-image"), options, "image",
flags | BDRV_O_PROTOCOL, false, &local_err);
ret = bdrv_file_open(&bs->file, filename, flags);
if (ret < 0) {
error_propagate(errp, local_err);
goto out;
return ret;
}
/* Set request alignment */
align = qemu_opt_get_size(opts, "align", bs->request_alignment);
if (align > 0 && align < INT_MAX && !(align & (align - 1))) {
bs->request_alignment = align;
} else {
error_setg(errp, "Invalid alignment");
ret = -EINVAL;
goto fail_unref;
}
ret = 0;
goto out;
fail_unref:
bdrv_unref(bs->file);
out:
qemu_opts_del(opts);
return ret;
return 0;
}
static void error_callback_bh(void *opaque)
@@ -522,7 +389,6 @@ static BlockDriverAIOCB *blkdebug_aio_writev(BlockDriverState *bs,
return bdrv_aio_writev(bs->file, sector_num, qiov, nb_sectors, cb, opaque);
}
static void blkdebug_close(BlockDriverState *bs)
{
BDRVBlkdebugState *s = bs->opaque;
@@ -531,32 +397,12 @@ static void blkdebug_close(BlockDriverState *bs)
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
remove_rule(rule);
QLIST_REMOVE(rule, next);
g_free(rule);
}
}
}
static void suspend_request(BlockDriverState *bs, BlkdebugRule *rule)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq r;
r = (BlkdebugSuspendedReq) {
.co = qemu_coroutine_self(),
.tag = g_strdup(rule->options.suspend.tag),
};
remove_rule(rule);
QLIST_INSERT_HEAD(&s->suspended_reqs, &r, next);
printf("blkdebug: Suspended request '%s'\n", r.tag);
qemu_coroutine_yield();
printf("blkdebug: Resuming request '%s'\n", r.tag);
QLIST_REMOVE(&r, next);
g_free(r.tag);
}
static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
bool injected)
{
@@ -580,10 +426,6 @@ static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
case ACTION_SET_STATE:
s->new_state = rule->options.set_state.new_state;
break;
case ACTION_SUSPEND:
suspend_request(bs, rule);
break;
}
return injected;
}
@@ -591,121 +433,38 @@ static bool process_rule(BlockDriverState *bs, struct BlkdebugRule *rule,
static void blkdebug_debug_event(BlockDriverState *bs, BlkDebugEvent event)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule, *next;
struct BlkdebugRule *rule;
bool injected;
assert((int)event >= 0 && event < BLKDBG_EVENT_MAX);
injected = false;
s->new_state = s->state;
QLIST_FOREACH_SAFE(rule, &s->rules[event], next, next) {
QLIST_FOREACH(rule, &s->rules[event], next) {
injected = process_rule(bs, rule, injected);
}
s->state = s->new_state;
}
static int blkdebug_debug_breakpoint(BlockDriverState *bs, const char *event,
const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
struct BlkdebugRule *rule;
BlkDebugEvent blkdebug_event;
if (get_event_by_name(event, &blkdebug_event) < 0) {
return -ENOENT;
}
rule = g_malloc(sizeof(*rule));
*rule = (struct BlkdebugRule) {
.event = blkdebug_event,
.action = ACTION_SUSPEND,
.state = 0,
.options.suspend.tag = g_strdup(tag),
};
QLIST_INSERT_HEAD(&s->rules[blkdebug_event], rule, next);
return 0;
}
static int blkdebug_debug_resume(BlockDriverState *bs, const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *next;
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
return 0;
}
}
return -ENOENT;
}
static int blkdebug_debug_remove_breakpoint(BlockDriverState *bs,
const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r, *r_next;
BlkdebugRule *rule, *next;
int i, ret = -ENOENT;
for (i = 0; i < BLKDBG_EVENT_MAX; i++) {
QLIST_FOREACH_SAFE(rule, &s->rules[i], next, next) {
if (rule->action == ACTION_SUSPEND &&
!strcmp(rule->options.suspend.tag, tag)) {
remove_rule(rule);
ret = 0;
}
}
}
QLIST_FOREACH_SAFE(r, &s->suspended_reqs, next, r_next) {
if (!strcmp(r->tag, tag)) {
qemu_coroutine_enter(r->co, NULL);
ret = 0;
}
}
return ret;
}
static bool blkdebug_debug_is_suspended(BlockDriverState *bs, const char *tag)
{
BDRVBlkdebugState *s = bs->opaque;
BlkdebugSuspendedReq *r;
QLIST_FOREACH(r, &s->suspended_reqs, next) {
if (!strcmp(r->tag, tag)) {
return true;
}
}
return false;
}
static int64_t blkdebug_getlength(BlockDriverState *bs)
{
return bdrv_getlength(bs->file);
}
static BlockDriver bdrv_blkdebug = {
.format_name = "blkdebug",
.protocol_name = "blkdebug",
.instance_size = sizeof(BDRVBlkdebugState),
.format_name = "blkdebug",
.protocol_name = "blkdebug",
.bdrv_parse_filename = blkdebug_parse_filename,
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_getlength = blkdebug_getlength,
.instance_size = sizeof(BDRVBlkdebugState),
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,
.bdrv_file_open = blkdebug_open,
.bdrv_close = blkdebug_close,
.bdrv_getlength = blkdebug_getlength,
.bdrv_debug_event = blkdebug_debug_event,
.bdrv_debug_breakpoint = blkdebug_debug_breakpoint,
.bdrv_debug_remove_breakpoint
= blkdebug_debug_remove_breakpoint,
.bdrv_debug_resume = blkdebug_debug_resume,
.bdrv_debug_is_suspended = blkdebug_debug_is_suspended,
.bdrv_aio_readv = blkdebug_aio_readv,
.bdrv_aio_writev = blkdebug_aio_writev,
.bdrv_debug_event = blkdebug_debug_event,
};
static void bdrv_blkdebug_init(void)



@@ -8,8 +8,8 @@
*/
#include <stdarg.h>
#include "qemu/sockets.h" /* for EINPROGRESS on Windows */
#include "block/block_int.h"
#include "qemu_socket.h" /* for EINPROGRESS on Windows */
#include "block_int.h"
typedef struct {
BlockDriverState *test_file;
@@ -69,100 +69,50 @@ static void GCC_FMT_ATTR(2, 3) blkverify_err(BlkverifyAIOCB *acb,
}
/* Valid blkverify filenames look like blkverify:path/to/raw_image:path/to/image */
static void blkverify_parse_filename(const char *filename, QDict *options,
Error **errp)
static int blkverify_open(BlockDriverState *bs, const char *filename, int flags)
{
const char *c;
QString *raw_path;
BDRVBlkverifyState *s = bs->opaque;
int ret;
char *raw, *c;
/* Parse the blkverify: prefix */
if (!strstart(filename, "blkverify:", &filename)) {
/* There was no prefix; therefore, all options have to be already
present in the QDict (except for the filename) */
qdict_put(options, "x-image", qstring_from_str(filename));
return;
if (strncmp(filename, "blkverify:", strlen("blkverify:"))) {
return -EINVAL;
}
filename += strlen("blkverify:");
/* Parse the raw image filename */
c = strchr(filename, ':');
if (c == NULL) {
error_setg(errp, "blkverify requires raw copy and original image path");
return;
return -EINVAL;
}
/* TODO Implement option pass-through and set raw.filename here */
raw_path = qstring_from_substr(filename, 0, c - filename - 1);
qdict_put(options, "x-raw", raw_path);
/* TODO Allow multi-level nesting and set file.filename here */
filename = c + 1;
qdict_put(options, "x-image", qstring_from_str(filename));
}
static QemuOptsList runtime_opts = {
.name = "blkverify",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "x-raw",
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{
.name = "x-image",
.type = QEMU_OPT_STRING,
.help = "[internal use only, will be removed]",
},
{ /* end of list */ }
},
};
static int blkverify_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVBlkverifyState *s = bs->opaque;
QemuOpts *opts;
Error *local_err = NULL;
int ret;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto fail;
}
/* Open the raw file */
assert(bs->file == NULL);
ret = bdrv_open_image(&bs->file, qemu_opt_get(opts, "x-raw"), options,
"raw", flags | BDRV_O_PROTOCOL, false, &local_err);
raw = g_strdup(filename);
raw[c - filename] = '\0';
ret = bdrv_file_open(&bs->file, raw, flags);
g_free(raw);
if (ret < 0) {
error_propagate(errp, local_err);
goto fail;
return ret;
}
filename = c + 1;
/* Open the test file */
assert(s->test_file == NULL);
ret = bdrv_open_image(&s->test_file, qemu_opt_get(opts, "x-image"), options,
"test", flags, false, &local_err);
s->test_file = bdrv_new("");
ret = bdrv_open(s->test_file, filename, flags, NULL);
if (ret < 0) {
error_propagate(errp, local_err);
bdrv_delete(s->test_file);
s->test_file = NULL;
goto fail;
return ret;
}
ret = 0;
fail:
return ret;
return 0;
}
static void blkverify_close(BlockDriverState *bs)
{
BDRVBlkverifyState *s = bs->opaque;
bdrv_unref(s->test_file);
bdrv_delete(s->test_file);
s->test_file = NULL;
}
@@ -173,6 +123,110 @@ static int64_t blkverify_getlength(BlockDriverState *bs)
return bdrv_getlength(s->test_file);
}
/**
* Check that I/O vector contents are identical
*
* @a: I/O vector
* @b: I/O vector
* @ret: Offset of the first mismatching byte, or -1 if the vectors match
*/
static ssize_t blkverify_iovec_compare(QEMUIOVector *a, QEMUIOVector *b)
{
int i;
ssize_t offset = 0;
assert(a->niov == b->niov);
for (i = 0; i < a->niov; i++) {
size_t len = 0;
uint8_t *p = (uint8_t *)a->iov[i].iov_base;
uint8_t *q = (uint8_t *)b->iov[i].iov_base;
assert(a->iov[i].iov_len == b->iov[i].iov_len);
while (len < a->iov[i].iov_len && *p++ == *q++) {
len++;
}
offset += len;
if (len != a->iov[i].iov_len) {
return offset;
}
}
return -1;
}
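A self-contained version of the same comparison, using struct iovec from <sys/uio.h> directly instead of QEMUIOVector, shows how the returned byte offset translates into the sector number reported by blkverify_verify_readv(); the buffers and the injected corruption are made up for the example.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/uio.h>

static ssize_t iovec_compare(const struct iovec *a, const struct iovec *b,
                             int niov)
{
    ssize_t offset = 0;
    int i;

    for (i = 0; i < niov; i++) {
        const uint8_t *p = a[i].iov_base;
        const uint8_t *q = b[i].iov_base;
        size_t len = 0;

        assert(a[i].iov_len == b[i].iov_len);
        while (len < a[i].iov_len && p[len] == q[len]) {
            len++;
        }
        offset += len;
        if (len != a[i].iov_len) {
            return offset;      /* offset of the first differing byte */
        }
    }
    return -1;                  /* vectors are identical */
}

int main(void)
{
    char x[1024], y[1024];
    memset(x, 0xaa, sizeof(x));
    memcpy(y, x, sizeof(y));
    y[700] ^= 1;                /* inject a single-byte corruption */

    struct iovec a[2] = { { x, 512 }, { x + 512, 512 } };
    struct iovec b[2] = { { y, 512 }, { y + 512, 512 } };

    ssize_t off = iovec_compare(a, b, 2);
    /* Prints: mismatch at byte 700 -> sector 1, which is how
     * blkverify_verify_readv() derives the sector from the offset. */
    printf("mismatch at byte %zd -> sector %zd\n", off, off / 512);
    return 0;
}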
typedef struct {
int src_index;
struct iovec *src_iov;
void *dest_base;
} IOVectorSortElem;
static int sortelem_cmp_src_base(const void *a, const void *b)
{
const IOVectorSortElem *elem_a = a;
const IOVectorSortElem *elem_b = b;
/* Don't overflow */
if (elem_a->src_iov->iov_base < elem_b->src_iov->iov_base) {
return -1;
} else if (elem_a->src_iov->iov_base > elem_b->src_iov->iov_base) {
return 1;
} else {
return 0;
}
}
static int sortelem_cmp_src_index(const void *a, const void *b)
{
const IOVectorSortElem *elem_a = a;
const IOVectorSortElem *elem_b = b;
return elem_a->src_index - elem_b->src_index;
}
/**
* Copy contents of I/O vector
*
* The relative relationships of overlapping iovecs are preserved. This is
* necessary to ensure identical semantics in the cloned I/O vector.
*/
static void blkverify_iovec_clone(QEMUIOVector *dest, const QEMUIOVector *src,
void *buf)
{
IOVectorSortElem sortelems[src->niov];
void *last_end;
int i;
/* Sort the source iovecs by base address */
for (i = 0; i < src->niov; i++) {
sortelems[i].src_index = i;
sortelems[i].src_iov = &src->iov[i];
}
qsort(sortelems, src->niov, sizeof(sortelems[0]), sortelem_cmp_src_base);
/* Allocate buffer space taking into account overlapping iovecs */
last_end = NULL;
for (i = 0; i < src->niov; i++) {
struct iovec *cur = sortelems[i].src_iov;
ptrdiff_t rewind = 0;
/* Detect overlap */
if (last_end && last_end > cur->iov_base) {
rewind = last_end - cur->iov_base;
}
sortelems[i].dest_base = buf - rewind;
buf += cur->iov_len - MIN(rewind, cur->iov_len);
last_end = MAX(cur->iov_base + cur->iov_len, last_end);
}
/* Sort by source iovec index and build destination iovec */
qsort(sortelems, src->niov, sizeof(sortelems[0]), sortelem_cmp_src_index);
for (i = 0; i < src->niov; i++) {
qemu_iovec_add(dest, sortelems[i].dest_base, src->iov[i].iov_len);
}
}
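The rewind computation is the non-obvious part of the clone. A small worked example, a sketch with made-up sizes rather than the driver code, takes two overlapping source iovecs (already sorted by base address) and shows how the overlap is reproduced inside the bounce buffer:

#include <stddef.h>
#include <stdio.h>
#include <sys/uio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))
#define MAX(a, b) ((a) > (b) ? (a) : (b))

int main(void)
{
    static char guest_mem[300];
    /* The second iovec starts 50 bytes inside the first, i.e. the two
     * source regions overlap by 50 bytes. */
    struct iovec src[2] = {
        { guest_mem +  0, 100 },
        { guest_mem + 50, 100 },
    };
    char bounce[200];
    char *buf = bounce;
    char *last_end = NULL;
    size_t dest_off[2];

    for (int i = 0; i < 2; i++) {
        ptrdiff_t rewind = 0;

        if (last_end && last_end > (char *)src[i].iov_base) {
            rewind = last_end - (char *)src[i].iov_base;   /* overlap size */
        }
        dest_off[i] = (size_t)((buf - rewind) - bounce);
        buf += src[i].iov_len - MIN((size_t)rewind, src[i].iov_len);
        last_end = MAX((char *)src[i].iov_base + src[i].iov_len, last_end);

        printf("iovec %d -> bounce offset %zu\n", i, dest_off[i]);
    }
    /* Prints offsets 0 and 50: the 50-byte overlap in guest memory maps
     * onto the same 50 bytes of the bounce buffer, preserving the
     * relative relationship of the regions. */
    return 0;
}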
static BlkverifyAIOCB *blkverify_aio_get(BlockDriverState *bs, bool is_write,
int64_t sector_num, QEMUIOVector *qiov,
int nb_sectors,
@@ -236,7 +290,7 @@ static void blkverify_aio_cb(void *opaque, int ret)
static void blkverify_verify_readv(BlkverifyAIOCB *acb)
{
ssize_t offset = qemu_iovec_compare(acb->qiov, &acb->raw_qiov);
ssize_t offset = blkverify_iovec_compare(acb->qiov, &acb->raw_qiov);
if (offset != -1) {
blkverify_err(acb, "contents mismatch in sector %" PRId64,
acb->sector_num + (int64_t)(offset / BDRV_SECTOR_SIZE));
@@ -254,7 +308,7 @@ static BlockDriverAIOCB *blkverify_aio_readv(BlockDriverState *bs,
acb->verify = blkverify_verify_readv;
acb->buf = qemu_blockalign(bs->file, qiov->size);
qemu_iovec_init(&acb->raw_qiov, acb->qiov->niov);
qemu_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
blkverify_iovec_clone(&acb->raw_qiov, qiov, acb->buf);
bdrv_aio_readv(s->test_file, sector_num, qiov, nb_sectors,
blkverify_aio_cb, acb);
@@ -288,36 +342,20 @@ static BlockDriverAIOCB *blkverify_aio_flush(BlockDriverState *bs,
return bdrv_aio_flush(s->test_file, cb, opaque);
}
static bool blkverify_recurse_is_first_non_filter(BlockDriverState *bs,
BlockDriverState *candidate)
{
BDRVBlkverifyState *s = bs->opaque;
bool perm = bdrv_recurse_is_first_non_filter(bs->file, candidate);
if (perm) {
return true;
}
return bdrv_recurse_is_first_non_filter(s->test_file, candidate);
}
static BlockDriver bdrv_blkverify = {
.format_name = "blkverify",
.protocol_name = "blkverify",
.instance_size = sizeof(BDRVBlkverifyState),
.format_name = "blkverify",
.protocol_name = "blkverify",
.bdrv_parse_filename = blkverify_parse_filename,
.bdrv_file_open = blkverify_open,
.bdrv_close = blkverify_close,
.bdrv_getlength = blkverify_getlength,
.instance_size = sizeof(BDRVBlkverifyState),
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
.bdrv_getlength = blkverify_getlength,
.is_filter = true,
.bdrv_recurse_is_first_non_filter = blkverify_recurse_is_first_non_filter,
.bdrv_file_open = blkverify_open,
.bdrv_close = blkverify_close,
.bdrv_aio_readv = blkverify_aio_readv,
.bdrv_aio_writev = blkverify_aio_writev,
.bdrv_aio_flush = blkverify_aio_flush,
};
static void bdrv_blkverify_init(void)


@@ -23,8 +23,8 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
/**************************************************************/
@@ -39,41 +39,56 @@
// not allocated: 0xffffffff
// always little-endian
struct bochs_header {
char magic[32]; /* "Bochs Virtual HD Image" */
char type[16]; /* "Redolog" */
char subtype[16]; /* "Undoable" / "Volatile" / "Growing" */
struct bochs_header_v1 {
char magic[32]; // "Bochs Virtual HD Image"
char type[16]; // "Redolog"
char subtype[16]; // "Undoable" / "Volatile" / "Growing"
uint32_t version;
uint32_t header; /* size of header */
uint32_t catalog; /* num of entries */
uint32_t bitmap; /* bitmap size */
uint32_t extent; /* extent size */
uint32_t header; // size of header
union {
struct {
uint32_t reserved; /* for ??? */
uint64_t disk; /* disk size */
char padding[HEADER_SIZE - 64 - 20 - 12];
} QEMU_PACKED redolog;
struct {
uint64_t disk; /* disk size */
char padding[HEADER_SIZE - 64 - 20 - 8];
} QEMU_PACKED redolog_v1;
char padding[HEADER_SIZE - 64 - 20];
struct {
uint32_t catalog; // num of entries
uint32_t bitmap; // bitmap size
uint32_t extent; // extent size
uint64_t disk; // disk size
char padding[HEADER_SIZE - 64 - 8 - 20];
} redolog;
char padding[HEADER_SIZE - 64 - 8];
} extra;
} QEMU_PACKED;
};
// always little-endian
struct bochs_header {
char magic[32]; // "Bochs Virtual HD Image"
char type[16]; // "Redolog"
char subtype[16]; // "Undoable" / "Volatile" / "Growing"
uint32_t version;
uint32_t header; // size of header
union {
struct {
uint32_t catalog; // num of entries
uint32_t bitmap; // bitmap size
uint32_t extent; // extent size
uint32_t reserved; // for ???
uint64_t disk; // disk size
char padding[HEADER_SIZE - 64 - 8 - 24];
} redolog;
char padding[HEADER_SIZE - 64 - 8];
} extra;
};
typedef struct BDRVBochsState {
CoMutex lock;
uint32_t *catalog_bitmap;
uint32_t catalog_size;
int catalog_size;
uint32_t data_offset;
int data_offset;
uint32_t bitmap_blocks;
uint32_t extent_blocks;
uint32_t extent_size;
int bitmap_blocks;
int extent_blocks;
int extent_size;
} BDRVBochsState;
static int bochs_probe(const uint8_t *buf, int buf_size, const char *filename)
@@ -93,19 +108,17 @@ static int bochs_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int bochs_open(BlockDriverState *bs, int flags)
{
BDRVBochsState *s = bs->opaque;
uint32_t i;
int i;
struct bochs_header bochs;
int ret;
struct bochs_header_v1 header_v1;
bs->read_only = 1; // no write support yet
ret = bdrv_pread(bs->file, 0, &bochs, sizeof(bochs));
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, 0, &bochs, sizeof(bochs)) != sizeof(bochs)) {
goto fail;
}
if (strcmp(bochs.magic, HEADER_MAGIC) ||
@@ -113,103 +126,63 @@ static int bochs_open(BlockDriverState *bs, QDict *options, int flags,
strcmp(bochs.subtype, GROWING_TYPE) ||
((le32_to_cpu(bochs.version) != HEADER_VERSION) &&
(le32_to_cpu(bochs.version) != HEADER_V1))) {
error_setg(errp, "Image not in Bochs format");
return -EINVAL;
}
if (le32_to_cpu(bochs.version) == HEADER_V1) {
bs->total_sectors = le64_to_cpu(bochs.extra.redolog_v1.disk) / 512;
} else {
bs->total_sectors = le64_to_cpu(bochs.extra.redolog.disk) / 512;
}
/* Limit to 1M entries to avoid unbounded allocation. This is what is
* needed for the largest image that bximage can create (~8 TB). */
s->catalog_size = le32_to_cpu(bochs.catalog);
if (s->catalog_size > 0x100000) {
error_setg(errp, "Catalog size is too large");
return -EFBIG;
}
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
ret = bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4);
if (ret < 0) {
goto fail;
}
if (le32_to_cpu(bochs.version) == HEADER_V1) {
memcpy(&header_v1, &bochs, sizeof(bochs));
bs->total_sectors = le64_to_cpu(header_v1.extra.redolog.disk) / 512;
} else {
bs->total_sectors = le64_to_cpu(bochs.extra.redolog.disk) / 512;
}
s->catalog_size = le32_to_cpu(bochs.extra.redolog.catalog);
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
if (bdrv_pread(bs->file, le32_to_cpu(bochs.header), s->catalog_bitmap,
s->catalog_size * 4) != s->catalog_size * 4)
goto fail;
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
s->data_offset = le32_to_cpu(bochs.header) + (s->catalog_size * 4);
s->bitmap_blocks = 1 + (le32_to_cpu(bochs.bitmap) - 1) / 512;
s->extent_blocks = 1 + (le32_to_cpu(bochs.extent) - 1) / 512;
s->bitmap_blocks = 1 + (le32_to_cpu(bochs.extra.redolog.bitmap) - 1) / 512;
s->extent_blocks = 1 + (le32_to_cpu(bochs.extra.redolog.extent) - 1) / 512;
s->extent_size = le32_to_cpu(bochs.extent);
if (s->extent_size < BDRV_SECTOR_SIZE) {
/* bximage actually never creates extents smaller than 4k */
error_setg(errp, "Extent size must be at least 512");
ret = -EINVAL;
goto fail;
} else if (!is_power_of_2(s->extent_size)) {
error_setg(errp, "Extent size %" PRIu32 " is not a power of two",
s->extent_size);
ret = -EINVAL;
goto fail;
} else if (s->extent_size > 0x800000) {
error_setg(errp, "Extent size %" PRIu32 " is too large",
s->extent_size);
ret = -EINVAL;
goto fail;
}
if (s->catalog_size < DIV_ROUND_UP(bs->total_sectors,
s->extent_size / BDRV_SECTOR_SIZE))
{
error_setg(errp, "Catalog size is too small for this disk size");
ret = -EINVAL;
goto fail;
}
s->extent_size = le32_to_cpu(bochs.extra.redolog.extent);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->catalog_bitmap);
return ret;
fail:
return -1;
}
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)
{
BDRVBochsState *s = bs->opaque;
uint64_t offset = sector_num * 512;
uint64_t extent_index, extent_offset, bitmap_offset;
int64_t offset = sector_num * 512;
int64_t extent_index, extent_offset, bitmap_offset;
char bitmap_entry;
int ret;
// seek to sector
extent_index = offset / s->extent_size;
extent_offset = (offset % s->extent_size) / 512;
if (s->catalog_bitmap[extent_index] == 0xffffffff) {
return 0; /* not allocated */
return -1; /* not allocated */
}
bitmap_offset = s->data_offset +
(512 * (uint64_t) s->catalog_bitmap[extent_index] *
(s->extent_blocks + s->bitmap_blocks));
bitmap_offset = s->data_offset + (512 * s->catalog_bitmap[extent_index] *
(s->extent_blocks + s->bitmap_blocks));
/* read in bitmap for current extent */
ret = bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, bitmap_offset + (extent_offset / 8),
&bitmap_entry, 1) != 1) {
return -1;
}
if (!((bitmap_entry >> (extent_offset % 8)) & 1)) {
return 0; /* not allocated */
return -1; /* not allocated */
}
return bitmap_offset + (512 * (s->bitmap_blocks + extent_offset));
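The offset arithmetic in seek_to_sector() is easier to check with concrete numbers. The sketch below uses made-up geometry (the extent size, catalog contents and data offset are illustrative only) and mirrors the steps above: locate the extent, then skip its bitmap blocks to reach the data block.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t extent_size   = 512 * 1024;           /* bytes per extent */
    uint32_t bitmap_blocks = 1;                    /* 512-byte blocks */
    uint32_t extent_blocks = extent_size / 512;
    uint64_t data_offset   = 512;                  /* header + catalog, made up */
    uint32_t catalog[4]    = { 0, 0xffffffff, 1, 0xffffffff };

    int64_t sector_num = 2100;                     /* a sector inside extent 2 */
    uint64_t offset = sector_num * 512;

    uint64_t extent_index  = offset / extent_size;
    uint64_t extent_offset = (offset % extent_size) / 512;

    if (catalog[extent_index] == 0xffffffff) {
        printf("sector %" PRId64 " is unallocated\n", sector_num);
        return 0;
    }

    /* Each catalog entry owns a bitmap region followed by the extent data. */
    uint64_t bitmap_offset = data_offset +
        512ull * catalog[extent_index] * (extent_blocks + bitmap_blocks);
    uint64_t block_offset  = bitmap_offset +
        512ull * (bitmap_blocks + extent_offset);

    printf("sector %" PRId64 " -> extent %" PRIu64 ", offset-in-extent %" PRIu64
           ", file offset %" PRIu64 "\n",
           sector_num, extent_index, extent_offset, block_offset);
    return 0;
}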
@@ -222,16 +195,13 @@ static int bochs_read(BlockDriverState *bs, int64_t sector_num,
while (nb_sectors > 0) {
int64_t block_offset = seek_to_sector(bs, sector_num);
if (block_offset < 0) {
return block_offset;
} else if (block_offset > 0) {
if (block_offset >= 0) {
ret = bdrv_pread(bs->file, block_offset, buf, 512);
if (ret < 0) {
return ret;
if (ret != 512) {
return -1;
}
} else {
} else
memset(buf, 0, 512);
}
nb_sectors--;
sector_num++;
buf += 512;


@@ -22,13 +22,10 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
#include <zlib.h>
/* Maximum compressed block size */
#define MAX_BLOCK_SIZE (64 * 1024 * 1024)
typedef struct BDRVCloopState {
CoMutex lock;
uint32_t block_size;
@@ -56,104 +53,38 @@ static int cloop_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int cloop_open(BlockDriverState *bs, int flags)
{
BDRVCloopState *s = bs->opaque;
uint32_t offsets_size, max_compressed_block_size = 1, i;
int ret;
bs->read_only = 1;
/* read header */
ret = bdrv_pread(bs->file, 128, &s->block_size, 4);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, 128, &s->block_size, 4) < 4) {
goto cloop_close;
}
s->block_size = be32_to_cpu(s->block_size);
if (s->block_size % 512) {
error_setg(errp, "block_size %" PRIu32 " must be a multiple of 512",
s->block_size);
return -EINVAL;
}
if (s->block_size == 0) {
error_setg(errp, "block_size cannot be zero");
return -EINVAL;
}
/* cloop's create_compressed_fs.c warns about block sizes beyond 256 KB but
* we can accept more. Prevent ridiculous values like 4 GB - 1 since we
* need a buffer this big.
*/
if (s->block_size > MAX_BLOCK_SIZE) {
error_setg(errp, "block_size %" PRIu32 " must be %u MB or less",
s->block_size,
MAX_BLOCK_SIZE / (1024 * 1024));
return -EINVAL;
}
ret = bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4);
if (ret < 0) {
return ret;
if (bdrv_pread(bs->file, 128 + 4, &s->n_blocks, 4) < 4) {
goto cloop_close;
}
s->n_blocks = be32_to_cpu(s->n_blocks);
/* read offsets */
if (s->n_blocks > (UINT32_MAX - 1) / sizeof(uint64_t)) {
/* Prevent integer overflow */
error_setg(errp, "n_blocks %" PRIu32 " must be %zu or less",
s->n_blocks,
(UINT32_MAX - 1) / sizeof(uint64_t));
return -EINVAL;
}
offsets_size = (s->n_blocks + 1) * sizeof(uint64_t);
if (offsets_size > 512 * 1024 * 1024) {
/* Prevent ridiculous offsets_size which causes memory allocation to
* fail or overflows bdrv_pread() size. In practice the 512 MB
* offsets[] limit supports 16 TB images at 256 KB block size.
*/
error_setg(errp, "image requires too many offsets, "
"try increasing block size");
return -EINVAL;
}
offsets_size = s->n_blocks * sizeof(uint64_t);
s->offsets = g_malloc(offsets_size);
ret = bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size);
if (ret < 0) {
goto fail;
if (bdrv_pread(bs->file, 128 + 4 + 4, s->offsets, offsets_size) <
offsets_size) {
goto cloop_close;
}
for (i = 0; i < s->n_blocks + 1; i++) {
uint64_t size;
for(i=0;i<s->n_blocks;i++) {
s->offsets[i] = be64_to_cpu(s->offsets[i]);
if (i == 0) {
continue;
}
if (s->offsets[i] < s->offsets[i - 1]) {
error_setg(errp, "offsets not monotonically increasing at "
"index %" PRIu32 ", image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
size = s->offsets[i] - s->offsets[i - 1];
/* Compressed blocks should be smaller than the uncompressed block size
* but maybe compression performed poorly so the compressed block is
* actually bigger. Clamp down on unrealistic values to prevent
* ridiculous s->compressed_block allocation.
*/
if (size > 2 * MAX_BLOCK_SIZE) {
error_setg(errp, "invalid compressed block size at index %" PRIu32
", image file is corrupt", i);
ret = -EINVAL;
goto fail;
}
if (size > max_compressed_block_size) {
max_compressed_block_size = size;
if (i > 0) {
uint32_t size = s->offsets[i] - s->offsets[i - 1];
if (size > max_compressed_block_size) {
max_compressed_block_size = size;
}
}
}
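The offsets[] handling shown here is mostly about bounding allocations. A standalone sketch with made-up offsets exercises the same guards: reject a block count whose offsets table would overflow, require monotonically increasing offsets, and track the largest compressed block.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_BLOCK_SIZE (64 * 1024 * 1024)

int main(void)
{
    uint32_t n_blocks = 3;
    /* n_blocks + 1 offsets: block i occupies [offsets[i], offsets[i + 1]). */
    uint64_t offsets[] = { 4096, 20000, 23000, 90000 };
    uint64_t max_compressed = 1;
    uint32_t i;

    if (n_blocks > (UINT32_MAX - 1) / sizeof(uint64_t)) {
        fprintf(stderr, "n_blocks too large: (n_blocks + 1) * 8 would overflow\n");
        return 1;
    }

    for (i = 1; i < n_blocks + 1; i++) {
        if (offsets[i] < offsets[i - 1]) {
            fprintf(stderr, "offsets not monotonically increasing at %" PRIu32 "\n", i);
            return 1;
        }
        uint64_t size = offsets[i] - offsets[i - 1];
        if (size > 2 * MAX_BLOCK_SIZE) {
            fprintf(stderr, "implausible compressed block size at %" PRIu32 "\n", i);
            return 1;
        }
        if (size > max_compressed) {
            max_compressed = size;
        }
    }
    printf("largest compressed block: %" PRIu64 " bytes\n", max_compressed);
    return 0;
}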
@@ -161,8 +92,7 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
s->compressed_block = g_malloc(max_compressed_block_size + 1);
s->uncompressed_block = g_malloc(s->block_size);
if (inflateInit(&s->zstream) != Z_OK) {
ret = -EINVAL;
goto fail;
goto cloop_close;
}
s->current_block = s->n_blocks;
@@ -171,11 +101,8 @@ static int cloop_open(BlockDriverState *bs, QDict *options, int flags,
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->offsets);
g_free(s->compressed_block);
g_free(s->uncompressed_block);
return ret;
cloop_close:
return -1;
}
static inline int cloop_read_block(BlockDriverState *bs, int block_num)
@@ -243,7 +170,9 @@ static coroutine_fn int cloop_co_read(BlockDriverState *bs, int64_t sector_num,
static void cloop_close(BlockDriverState *bs)
{
BDRVCloopState *s = bs->opaque;
g_free(s->offsets);
if (s->n_blocks > 0) {
g_free(s->offsets);
}
g_free(s->compressed_block);
g_free(s->uncompressed_block);
inflateEnd(&s->zstream);


@@ -13,8 +13,8 @@
*/
#include "trace.h"
#include "block/block_int.h"
#include "block/blockjob.h"
#include "block_int.h"
#include "blockjob.h"
#include "qemu/ratelimit.h"
enum {
@@ -65,7 +65,7 @@ static void coroutine_fn commit_run(void *opaque)
BlockDriverState *active = s->active;
BlockDriverState *top = s->top;
BlockDriverState *base = s->base;
BlockDriverState *overlay_bs;
BlockDriverState *overlay_bs = NULL;
int64_t sector_num, end;
int ret = 0;
int n = 0;
@@ -92,6 +92,8 @@ static void coroutine_fn commit_run(void *opaque)
}
}
overlay_bs = bdrv_find_overlay(active, top);
end = s->common.len >> BDRV_SECTOR_BITS;
buf = qemu_blockalign(top, COMMIT_BUFFER_SIZE);
@@ -101,16 +103,16 @@ static void coroutine_fn commit_run(void *opaque)
wait:
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that bdrv_drain_all() returns.
* with no pending I/O here so that qemu_aio_flush() returns.
*/
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
/* Copy if allocated above the base */
ret = bdrv_is_allocated_above(top, base, sector_num,
COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE,
&n);
ret = bdrv_co_is_allocated_above(top, base, sector_num,
COMMIT_BUFFER_SIZE / BDRV_SECTOR_SIZE,
&n);
copy = (ret == 1);
trace_commit_one_iteration(s, sector_num, n, ret);
if (copy) {
@@ -154,8 +156,7 @@ exit_restore_reopen:
if (s->base_flags != bdrv_get_flags(base)) {
bdrv_reopen(base, s->base_flags, NULL);
}
overlay_bs = bdrv_find_overlay(active, top);
if (overlay_bs && s->orig_overlay_flags != bdrv_get_flags(overlay_bs)) {
if (s->orig_overlay_flags != bdrv_get_flags(overlay_bs)) {
bdrv_reopen(overlay_bs, s->orig_overlay_flags, NULL);
}
@@ -173,9 +174,9 @@ static void commit_set_speed(BlockJob *job, int64_t speed, Error **errp)
ratelimit_set_speed(&s->limit, speed / BDRV_SECTOR_SIZE, SLICE_TIME);
}
static const BlockJobDriver commit_job_driver = {
static BlockJobType commit_job_type = {
.instance_size = sizeof(CommitBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.job_type = "commit",
.set_speed = commit_set_speed,
};
@@ -194,11 +195,17 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
if ((on_error == BLOCKDEV_ON_ERROR_STOP ||
on_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
error_setg(errp, "Invalid parameter combination");
error_set(errp, QERR_INVALID_PARAMETER_COMBINATION);
return;
}
/* Once we support top == active layer, remove this check */
if (top == bs) {
error_setg(errp,
"Top image as the active layer is currently unsupported");
return;
}
assert(top != bs);
if (top == base) {
error_setg(errp, "Invalid files for merge: top and base are the same");
return;
@@ -232,7 +239,7 @@ void commit_start(BlockDriverState *bs, BlockDriverState *base,
}
s = block_job_create(&commit_job_driver, bs, speed, cb, opaque, errp);
s = block_job_create(&commit_job_type, bs, speed, cb, opaque, errp);
if (!s) {
return;
}


@@ -22,8 +22,8 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
/**************************************************************/
/* COW block driver using file system holes */
@@ -58,8 +58,7 @@ static int cow_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int cow_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int cow_open(BlockDriverState *bs, int flags)
{
BDRVCowState *s = bs->opaque;
struct cow_header_v2 cow_header;
@@ -74,7 +73,6 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
}
if (be32_to_cpu(cow_header.magic) != COW_MAGIC) {
error_setg(errp, "Image not in COW format");
ret = -EINVAL;
goto fail;
}
@@ -82,8 +80,8 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
if (be32_to_cpu(cow_header.version) != COW_VERSION) {
char version[64];
snprintf(version, sizeof(version),
"COW version %" PRIu32, cow_header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
"COW version %d", cow_header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "cow", version);
ret = -ENOTSUP;
goto fail;
@@ -104,45 +102,42 @@ static int cow_open(BlockDriverState *bs, QDict *options, int flags,
return ret;
}
static inline void cow_set_bits(uint8_t *bitmap, int start, int64_t nb_sectors)
/*
* XXX(hch): right now these functions are extremely inefficient.
* We should just read the whole bitmap we'll need in one go instead.
*/
static inline int cow_set_bit(BlockDriverState *bs, int64_t bitnum)
{
int64_t bitnum = start, last = start + nb_sectors;
while (bitnum < last) {
if ((bitnum & 7) == 0 && bitnum + 8 <= last) {
bitmap[bitnum / 8] = 0xFF;
bitnum += 8;
continue;
}
bitmap[bitnum/8] |= (1 << (bitnum % 8));
bitnum++;
uint64_t offset = sizeof(struct cow_header_v2) + bitnum / 8;
uint8_t bitmap;
int ret;
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
bitmap |= (1 << (bitnum % 8));
ret = bdrv_pwrite_sync(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
return 0;
}
#define BITS_PER_BITMAP_SECTOR (512 * 8)
/* Cannot use bitmap.c on big-endian machines. */
static int cow_test_bit(int64_t bitnum, const uint8_t *bitmap)
static inline int is_bit_set(BlockDriverState *bs, int64_t bitnum)
{
return (bitmap[bitnum / 8] & (1 << (bitnum & 7))) != 0;
}
uint64_t offset = sizeof(struct cow_header_v2) + bitnum / 8;
uint8_t bitmap;
int ret;
static int cow_find_streak(const uint8_t *bitmap, int value, int start, int nb_sectors)
{
int streak_value = value ? 0xFF : 0;
int last = MIN(start + nb_sectors, BITS_PER_BITMAP_SECTOR);
int bitnum = start;
while (bitnum < last) {
if ((bitnum & 7) == 0 && bitmap[bitnum / 8] == streak_value) {
bitnum += 8;
continue;
}
if (cow_test_bit(bitnum, bitmap) == value) {
bitnum++;
continue;
}
break;
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
return MIN(bitnum, last) - start;
return !!(bitmap & (1 << (bitnum % 8)));
}
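cow_find_streak() is short but dense. A standalone sketch of the same idea (runnable outside QEMU, with a made-up bitmap) scans a 512-byte bitmap sector for a run of equal bits, skipping whole 0x00/0xFF bytes where possible:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BITS_PER_BITMAP_SECTOR (512 * 8)
#define MIN(a, b) ((a) < (b) ? (a) : (b))

static int test_bit(int64_t bitnum, const uint8_t *bitmap)
{
    return (bitmap[bitnum / 8] & (1 << (bitnum & 7))) != 0;
}

static int find_streak(const uint8_t *bitmap, int value, int start, int nb)
{
    int streak_byte = value ? 0xFF : 0;
    int last = MIN(start + nb, BITS_PER_BITMAP_SECTOR);
    int bitnum = start;

    while (bitnum < last) {
        if ((bitnum & 7) == 0 && bitmap[bitnum / 8] == streak_byte) {
            bitnum += 8;                 /* whole byte has the wanted value */
            continue;
        }
        if (test_bit(bitnum, bitmap) == value) {
            bitnum++;
            continue;
        }
        break;
    }
    return MIN(bitnum, last) - start;
}

int main(void)
{
    uint8_t bitmap[512];
    memset(bitmap, 0, sizeof(bitmap));
    memset(bitmap, 0xFF, 3);             /* sectors 0..23 marked written */
    bitmap[3] = 0x03;                    /* ...plus sectors 24 and 25 */

    printf("written streak from 0: %d\n", find_streak(bitmap, 1, 0, 4096));  /* 26 */
    printf("clean streak from 26:  %d\n", find_streak(bitmap, 0, 26, 4096)); /* 4070 */
    return 0;
}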
/* Return true if first block has been changed (ie. current version is
@@ -151,100 +146,40 @@ static int cow_find_streak(const uint8_t *bitmap, int value, int start, int nb_s
static int coroutine_fn cow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
bool first = true;
int changed = 0, same = 0;
int changed;
do {
int ret;
uint8_t bitmap[BDRV_SECTOR_SIZE];
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
int sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
if (first) {
changed = cow_test_bit(bitnum, bitmap);
first = false;
}
same += cow_find_streak(bitmap, changed, bitnum, nb_sectors);
bitnum += sector_bits;
nb_sectors -= sector_bits;
offset += BDRV_SECTOR_SIZE;
} while (nb_sectors);
*num_same = same;
return changed;
}
static int64_t coroutine_fn cow_co_get_block_status(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *num_same)
{
BDRVCowState *s = bs->opaque;
int ret = cow_co_is_allocated(bs, sector_num, nb_sectors, num_same);
int64_t offset = s->cow_sectors_offset + (sector_num << BDRV_SECTOR_BITS);
if (ret < 0) {
return ret;
if (nb_sectors == 0) {
*num_same = nb_sectors;
return 0;
}
return (ret ? BDRV_BLOCK_DATA : 0) | offset | BDRV_BLOCK_OFFSET_VALID;
changed = is_bit_set(bs, sector_num);
if (changed < 0) {
return 0; /* XXX: how to return I/O errors? */
}
for (*num_same = 1; *num_same < nb_sectors; (*num_same)++) {
if (is_bit_set(bs, sector_num + *num_same) != changed)
break;
}
return changed;
}
static int cow_update_bitmap(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
int64_t bitnum = sector_num + sizeof(struct cow_header_v2) * 8;
uint64_t offset = (bitnum / 8) & -BDRV_SECTOR_SIZE;
bool first = true;
int sector_bits;
int error = 0;
int i;
for ( ; nb_sectors;
bitnum += sector_bits,
nb_sectors -= sector_bits,
offset += BDRV_SECTOR_SIZE) {
int ret, set;
uint8_t bitmap[BDRV_SECTOR_SIZE];
bitnum &= BITS_PER_BITMAP_SECTOR - 1;
sector_bits = MIN(nb_sectors, BITS_PER_BITMAP_SECTOR - bitnum);
ret = bdrv_pread(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
}
/* Skip over any already set bits */
set = cow_find_streak(bitmap, 1, bitnum, sector_bits);
bitnum += set;
sector_bits -= set;
nb_sectors -= set;
if (!sector_bits) {
continue;
}
if (first) {
ret = bdrv_flush(bs->file);
if (ret < 0) {
return ret;
}
first = false;
}
cow_set_bits(bitmap, bitnum, sector_bits);
ret = bdrv_pwrite(bs->file, offset, &bitmap, sizeof(bitmap));
if (ret < 0) {
return ret;
for (i = 0; i < nb_sectors; i++) {
error = cow_set_bit(bs, sector_num + i);
if (error) {
break;
}
}
return 0;
return error;
}
static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
@@ -254,11 +189,7 @@ static int coroutine_fn cow_read(BlockDriverState *bs, int64_t sector_num,
int ret, n;
while (nb_sectors > 0) {
ret = cow_co_is_allocated(bs, sector_num, nb_sectors, &n);
if (ret < 0) {
return ret;
}
if (ret) {
if (bdrv_co_is_allocated(bs, sector_num, nb_sectors, &n)) {
ret = bdrv_pread(bs->file,
s->cow_sectors_offset + sector_num * 512,
buf, n * 512);
@@ -324,14 +255,12 @@ static void cow_close(BlockDriverState *bs)
{
}
static int cow_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int cow_create(const char *filename, QEMUOptionParameter *options)
{
struct cow_header_v2 cow_header;
struct stat st;
int64_t image_sectors = 0;
const char *image_filename = NULL;
Error *local_err = NULL;
int ret;
BlockDriverState *cow_bs;
@@ -345,17 +274,13 @@ static int cow_create(const char *filename, QEMUOptionParameter *options,
options++;
}
ret = bdrv_create_file(filename, options, &local_err);
ret = bdrv_create_file(filename, options);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
cow_bs = NULL;
ret = bdrv_open(&cow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
ret = bdrv_file_open(&cow_bs, filename, BDRV_O_RDWR);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
@@ -389,7 +314,7 @@ static int cow_create(const char *filename, QEMUOptionParameter *options,
}
exit:
bdrv_unref(cow_bs);
bdrv_delete(cow_bs);
return ret;
}
@@ -415,11 +340,10 @@ static BlockDriver bdrv_cow = {
.bdrv_open = cow_open,
.bdrv_close = cow_close,
.bdrv_create = cow_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_read = cow_co_read,
.bdrv_write = cow_co_write,
.bdrv_co_get_block_status = cow_co_get_block_status,
.bdrv_co_is_allocated = cow_co_is_allocated,
.create_options = cow_create_options,
};


@@ -22,7 +22,7 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "block_int.h"
#include <curl/curl.h>
// #define DEBUG
@@ -34,15 +34,6 @@
#define DPRINTF(fmt, ...) do { } while (0)
#endif
#if LIBCURL_VERSION_NUM >= 0x071000
/* The multi interface timer callback was introduced in 7.16.0 */
#define NEED_CURL_TIMER_CALLBACK
#endif
#define PROTOCOLS (CURLPROTO_HTTP | CURLPROTO_HTTPS | \
CURLPROTO_FTP | CURLPROTO_FTPS | \
CURLPROTO_TFTP)
#define CURL_NUM_STATES 8
#define CURL_NUM_ACB 8
#define SECTOR_SIZE 512
@@ -71,7 +62,6 @@ typedef struct CURLState
struct BDRVCURLState *s;
CURLAIOCB *acb[CURL_NUM_ACB];
CURL *curl;
curl_socket_t sock_fd;
char *orig_buf;
size_t buf_start;
size_t buf_off;
@@ -83,70 +73,47 @@ typedef struct CURLState
typedef struct BDRVCURLState {
CURLM *multi;
QEMUTimer timer;
size_t len;
CURLState states[CURL_NUM_STATES];
char *url;
size_t readahead_size;
bool accept_range;
} BDRVCURLState;
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
static void curl_multi_read(void *arg);
#ifdef NEED_CURL_TIMER_CALLBACK
static int curl_timer_cb(CURLM *multi, long timeout_ms, void *opaque)
{
BDRVCURLState *s = opaque;
DPRINTF("CURL: timer callback timeout_ms %ld\n", timeout_ms);
if (timeout_ms == -1) {
timer_del(&s->timer);
} else {
int64_t timeout_ns = (int64_t)timeout_ms * 1000 * 1000;
timer_mod(&s->timer,
qemu_clock_get_ns(QEMU_CLOCK_REALTIME) + timeout_ns);
}
return 0;
}
#endif
static int curl_aio_flush(void *opaque);
static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
void *s, void *sp)
{
CURLState *state = NULL;
curl_easy_getinfo(curl, CURLINFO_PRIVATE, (char **)&state);
state->sock_fd = fd;
DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd);
switch (action) {
case CURL_POLL_IN:
qemu_aio_set_fd_handler(fd, curl_multi_read, NULL, state);
qemu_aio_set_fd_handler(fd, curl_multi_do, NULL, curl_aio_flush, s);
break;
case CURL_POLL_OUT:
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, state);
qemu_aio_set_fd_handler(fd, NULL, curl_multi_do, curl_aio_flush, s);
break;
case CURL_POLL_INOUT:
qemu_aio_set_fd_handler(fd, curl_multi_read, curl_multi_do, state);
qemu_aio_set_fd_handler(fd, curl_multi_do, curl_multi_do,
curl_aio_flush, s);
break;
case CURL_POLL_REMOVE:
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL);
qemu_aio_set_fd_handler(fd, NULL, NULL, NULL, NULL);
break;
}
return 0;
}
static size_t curl_header_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
static size_t curl_size_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
{
BDRVCURLState *s = opaque;
CURLState *s = ((CURLState*)opaque);
size_t realsize = size * nmemb;
const char *accept_line = "Accept-Ranges: bytes";
size_t fsize;
if (realsize >= strlen(accept_line)
&& strncmp((char *)ptr, accept_line, strlen(accept_line)) == 0) {
s->accept_range = true;
if(sscanf(ptr, "Content-Length: %zd", &fsize) == 1) {
s->s->len = fsize;
}
return realsize;
@@ -161,13 +128,8 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
DPRINTF("CURL: Just reading %zd bytes\n", realsize);
if (!s || !s->orig_buf)
return 0;
goto read_end;
if (s->buf_off >= s->buf_len) {
/* buffer full, read nothing */
return 0;
}
realsize = MIN(realsize, s->buf_len - s->buf_off);
memcpy(s->orig_buf + s->buf_off, ptr, realsize);
s->buf_off += realsize;
@@ -186,6 +148,7 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
}
}
read_end:
return realsize;
}
@@ -220,8 +183,7 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
}
// Wait for unfinished chunks
if (state->in_use &&
(start >= state->buf_start) &&
if ((start >= state->buf_start) &&
(start <= buf_fend) &&
(end >= state->buf_start) &&
(end <= buf_fend))
@@ -243,87 +205,61 @@ static int curl_find_buf(BDRVCURLState *s, size_t start, size_t len,
return FIND_RET_NONE;
}
static void curl_multi_check_completion(BDRVCURLState *s)
static void curl_multi_do(void *arg)
{
BDRVCURLState *s = (BDRVCURLState *)arg;
int running;
int r;
int msgs_in_queue;
if (!s->multi)
return;
do {
r = curl_multi_socket_all(s->multi, &running);
} while(r == CURLM_CALL_MULTI_PERFORM);
/* Try to find done transfers, so we can free the easy
* handle again. */
for (;;) {
do {
CURLMsg *msg;
msg = curl_multi_info_read(s->multi, &msgs_in_queue);
/* Quit when there are no more completions */
if (!msg)
break;
if (msg->msg == CURLMSG_DONE) {
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
(char **)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
if (msg->msg == CURLMSG_NONE)
break;
switch (msg->msg) {
case CURLMSG_DONE:
{
CURLState *state = NULL;
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE, (char**)&state);
/* ACBs for successful messages get completed in curl_read_cb */
if (msg->data.result != CURLE_OK) {
int i;
for (i = 0; i < CURL_NUM_ACB; i++) {
CURLAIOCB *acb = state->acb[i];
if (acb == NULL) {
continue;
}
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
state->acb[i] = NULL;
}
}
curl_clean_state(state);
break;
}
default:
msgs_in_queue = 0;
break;
}
}
}
static void curl_multi_do(void *arg)
{
CURLState *s = (CURLState *)arg;
int running;
int r;
if (!s->s->multi) {
return;
}
do {
r = curl_multi_socket_action(s->s->multi, s->sock_fd, 0, &running);
} while(r == CURLM_CALL_MULTI_PERFORM);
}
static void curl_multi_read(void *arg)
{
CURLState *s = (CURLState *)arg;
curl_multi_do(arg);
curl_multi_check_completion(s->s);
}
static void curl_multi_timeout_do(void *arg)
{
#ifdef NEED_CURL_TIMER_CALLBACK
BDRVCURLState *s = (BDRVCURLState *)arg;
int running;
if (!s->multi) {
return;
}
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
curl_multi_check_completion(s);
#else
abort();
#endif
} while(msgs_in_queue);
}
static CURLState *curl_init_state(BDRVCURLState *s)
@@ -344,42 +280,33 @@ static CURLState *curl_init_state(BDRVCURLState *s)
break;
}
if (!state) {
qemu_aio_wait();
g_usleep(100);
curl_multi_do(s);
}
} while(!state);
if (!state->curl) {
state->curl = curl_easy_init();
if (!state->curl) {
return NULL;
}
curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, 5);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION,
(void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
if (state->curl)
goto has_curl;
/* Restrict supported protocols to avoid security issues in the more
* obscure protocols. For example, do not allow POP3/SMTP/IMAP; see
* CVE-2013-0249.
*
* Restricting protocols is only supported from 7.19.4 upwards.
*/
#if LIBCURL_VERSION_NUM >= 0x071304
curl_easy_setopt(state->curl, CURLOPT_PROTOCOLS, PROTOCOLS);
curl_easy_setopt(state->curl, CURLOPT_REDIR_PROTOCOLS, PROTOCOLS);
#endif
state->curl = curl_easy_init();
if (!state->curl)
return NULL;
curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
curl_easy_setopt(state->curl, CURLOPT_TIMEOUT, 5);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_WRITEDATA, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_PRIVATE, (void *)state);
curl_easy_setopt(state->curl, CURLOPT_AUTOREFERER, 1);
curl_easy_setopt(state->curl, CURLOPT_FOLLOWLOCATION, 1);
curl_easy_setopt(state->curl, CURLOPT_NOSIGNAL, 1);
curl_easy_setopt(state->curl, CURLOPT_ERRORBUFFER, state->errmsg);
curl_easy_setopt(state->curl, CURLOPT_FAILONERROR, 1);
#ifdef DEBUG_VERBOSE
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
curl_easy_setopt(state->curl, CURLOPT_VERBOSE, 1);
#endif
}
has_curl:
state->s = s;
@@ -393,9 +320,11 @@ static void curl_clean_state(CURLState *s)
s->in_use = 0;
}
static void curl_parse_filename(const char *filename, QDict *options,
Error **errp)
static int curl_open(BlockDriverState *bs, const char *filename, int flags)
{
BDRVCURLState *s = bs->opaque;
CURLState *state = NULL;
double d;
#define RA_OPTSTR ":readahead="
char *file;
@@ -403,17 +332,19 @@ static void curl_parse_filename(const char *filename, QDict *options,
const char *ra_val;
int parse_state = 0;
static int inited = 0;
file = g_strdup(filename);
s->readahead_size = READ_AHEAD_SIZE;
/* Parse a trailing ":readahead=#:" param, if present. */
ra = file + strlen(file) - 1;
while (ra >= file) {
if (parse_state == 0) {
if (*ra == ':') {
if (*ra == ':')
parse_state++;
} else {
else
break;
}
} else if (parse_state == 1) {
if (*ra > '9' || *ra < '0') {
char *opt_start = ra - strlen(RA_OPTSTR) + 1;
@@ -422,71 +353,19 @@ static void curl_parse_filename(const char *filename, QDict *options,
ra_val = ra + 1;
ra -= strlen(RA_OPTSTR) - 1;
*ra = '\0';
qdict_put(options, "readahead", qstring_from_str(ra_val));
s->readahead_size = atoi(ra_val);
break;
} else {
break;
}
break;
}
}
ra--;
}
qdict_put(options, "url", qstring_from_str(file));
g_free(file);
}
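The trailing :readahead=<bytes>: option handling can be illustrated with a simplified standalone parser. This is a sketch only: it uses strstr rather than the backwards character scan above, which is good enough for the example but would misfire if ":readahead=" also appeared earlier in the URL, and the default readahead value is a stand-in, not necessarily the real READ_AHEAD_SIZE.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char file[] = "http://example.com/disk.img:readahead=65536:";
    const char *optstr = ":readahead=";
    size_t readahead = 256 * 1024;       /* stand-in default */

    /* The option, if present, is the last ":"-terminated component. */
    size_t len = strlen(file);
    if (len > 0 && file[len - 1] == ':') {
        char *opt = strstr(file, optstr);
        if (opt && opt + strlen(optstr) < file + len - 1) {
            readahead = strtoul(opt + strlen(optstr), NULL, 10);
            *opt = '\0';                 /* strip the option off the URL */
        }
    }

    if (readahead % 512 != 0) {
        fprintf(stderr, "readahead %zu is not a multiple of 512\n", readahead);
        return 1;
    }
    printf("url = %s, readahead = %zu bytes\n", file, readahead);
    return 0;
}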
static QemuOptsList runtime_opts = {
.name = "curl",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "url",
.type = QEMU_OPT_STRING,
.help = "URL to open",
},
{
.name = "readahead",
.type = QEMU_OPT_SIZE,
.help = "Readahead size",
},
{ /* end of list */ }
},
};
static int curl_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
{
BDRVCURLState *s = bs->opaque;
CURLState *state = NULL;
QemuOpts *opts;
Error *local_err = NULL;
const char *file;
double d;
static int inited = 0;
if (flags & BDRV_O_RDWR) {
error_setg(errp, "curl block device does not support writes");
return -EROFS;
}
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto out_noclean;
}
s->readahead_size = qemu_opt_get_size(opts, "readahead", READ_AHEAD_SIZE);
if ((s->readahead_size & 0x1ff) != 0) {
error_setg(errp, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512",
s->readahead_size);
goto out_noclean;
}
file = qemu_opt_get(opts, "url");
if (file == NULL) {
error_setg(errp, "curl block driver requires an 'url' option");
fprintf(stderr, "HTTP_READAHEAD_SIZE %zd is not a multiple of 512\n",
s->readahead_size);
goto out_noclean;
}
@@ -496,65 +375,64 @@ static int curl_open(BlockDriverState *bs, QDict *options, int flags,
}
DPRINTF("CURL: Opening %s\n", file);
s->url = g_strdup(file);
s->url = file;
state = curl_init_state(s);
if (!state)
goto out_noclean;
// Get file size
s->accept_range = false;
curl_easy_setopt(state->curl, CURLOPT_NOBODY, 1);
curl_easy_setopt(state->curl, CURLOPT_HEADERFUNCTION,
curl_header_cb);
curl_easy_setopt(state->curl, CURLOPT_HEADERDATA, s);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_size_cb);
if (curl_easy_perform(state->curl))
goto out;
curl_easy_getinfo(state->curl, CURLINFO_CONTENT_LENGTH_DOWNLOAD, &d);
curl_easy_setopt(state->curl, CURLOPT_WRITEFUNCTION, (void *)curl_read_cb);
curl_easy_setopt(state->curl, CURLOPT_NOBODY, 0);
if (d)
s->len = (size_t)d;
else if(!s->len)
goto out;
if ((!strncasecmp(s->url, "http://", strlen("http://"))
|| !strncasecmp(s->url, "https://", strlen("https://")))
&& !s->accept_range) {
pstrcpy(state->errmsg, CURL_ERROR_SIZE,
"Server does not support 'range' (byte ranges).");
goto out;
}
DPRINTF("CURL: Size = %zd\n", s->len);
curl_clean_state(state);
curl_easy_cleanup(state->curl);
state->curl = NULL;
aio_timer_init(bdrv_get_aio_context(bs), &s->timer,
QEMU_CLOCK_REALTIME, SCALE_NS,
curl_multi_timeout_do, s);
// Now we know the file exists and its size, so let's
// initialize the multi interface!
s->multi = curl_multi_init();
curl_multi_setopt(s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb);
#ifdef NEED_CURL_TIMER_CALLBACK
curl_multi_setopt(s->multi, CURLMOPT_TIMERDATA, s);
curl_multi_setopt(s->multi, CURLMOPT_TIMERFUNCTION, curl_timer_cb);
#endif
curl_multi_setopt( s->multi, CURLMOPT_SOCKETDATA, s);
curl_multi_setopt( s->multi, CURLMOPT_SOCKETFUNCTION, curl_sock_cb );
curl_multi_do(s);
qemu_opts_del(opts);
return 0;
out:
error_setg(errp, "CURL: Error opening file: %s", state->errmsg);
fprintf(stderr, "CURL: Error opening file: %s\n", state->errmsg);
curl_easy_cleanup(state->curl);
state->curl = NULL;
out_noclean:
g_free(s->url);
qemu_opts_del(opts);
g_free(file);
return -EINVAL;
}
static int curl_aio_flush(void *opaque)
{
BDRVCURLState *s = opaque;
int i, j;
for (i=0; i < CURL_NUM_STATES; i++) {
for(j=0; j < CURL_NUM_ACB; j++) {
if (s->states[i].acb[j]) {
return 1;
}
}
}
return 0;
}
static void curl_aio_cancel(BlockDriverAIOCB *blockacb)
{
// Do we have to implement canceling? Seems to work without...
@@ -569,7 +447,6 @@ static const AIOCBInfo curl_aiocb_info = {
static void curl_readv_bh_cb(void *p)
{
CURLState *state;
int running;
CURLAIOCB *acb = p;
BDRVCURLState *s = acb->common.bs->opaque;
@@ -618,9 +495,8 @@ static void curl_readv_bh_cb(void *p)
curl_easy_setopt(state->curl, CURLOPT_RANGE, state->range);
curl_multi_add_handle(s->multi, state->curl);
curl_multi_do(s);
/* Tell curl it needs to kick things off */
curl_multi_socket_action(s->multi, CURL_SOCKET_TIMEOUT, 0, &running);
}
static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
@@ -636,6 +512,12 @@ static BlockDriverAIOCB *curl_aio_readv(BlockDriverState *bs,
acb->nb_sectors = nb_sectors;
acb->bh = qemu_bh_new(curl_readv_bh_cb, acb);
if (!acb->bh) {
DPRINTF("CURL: qemu_bh_new failed\n");
return NULL;
}
qemu_bh_schedule(acb->bh);
return &acb->common;
}
@@ -660,9 +542,6 @@ static void curl_close(BlockDriverState *bs)
}
if (s->multi)
curl_multi_cleanup(s->multi);
timer_del(&s->timer);
g_free(s->url);
}
@@ -673,68 +552,63 @@ static int64_t curl_getlength(BlockDriverState *bs)
}
static BlockDriver bdrv_http = {
.format_name = "http",
.protocol_name = "http",
.format_name = "http",
.protocol_name = "http",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_https = {
.format_name = "https",
.protocol_name = "https",
.format_name = "https",
.protocol_name = "https",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_ftp = {
.format_name = "ftp",
.protocol_name = "ftp",
.format_name = "ftp",
.protocol_name = "ftp",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_ftps = {
.format_name = "ftps",
.protocol_name = "ftps",
.format_name = "ftps",
.protocol_name = "ftps",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_aio_readv = curl_aio_readv,
};
static BlockDriver bdrv_tftp = {
.format_name = "tftp",
.protocol_name = "tftp",
.format_name = "tftp",
.protocol_name = "tftp",
.instance_size = sizeof(BDRVCURLState),
.bdrv_parse_filename = curl_parse_filename,
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.instance_size = sizeof(BDRVCURLState),
.bdrv_file_open = curl_open,
.bdrv_close = curl_close,
.bdrv_getlength = curl_getlength,
.bdrv_aio_readv = curl_aio_readv,
.bdrv_aio_readv = curl_aio_readv,
};
static void curl_block_init(void)

block/dictzip.c Normal file

@@ -0,0 +1,566 @@
/*
* DictZip Block driver for dictzip enabled gzip files
*
* Use the "dictzip" tool from the "dictd" package to create gzip files that
* contain the extra DictZip headers.
*
* dictzip(1) is a compression program which creates compressed files in the
* gzip format (see RFC 1952). However, unlike gzip(1), dictzip(1) compresses
* the file in pieces and stores an index to the pieces in the gzip header.
* This allows random access to the file at the granularity of the compressed
* pieces (currently about 64kB) while maintaining good compression ratios
* (within 5% of the expected ratio for dictionary data).
* dictd(8) uses files stored in this format.
*
* For details on DictZip see http://dict.org/.
*
* Copyright (c) 2009 Alexander Graf <agraf@suse.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
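As a rough sketch of the random-access scheme described above (illustrative only; the helper below is not part of this file), an uncompressed offset is mapped to its chunk by integer division, and the chunk's position in the compressed file comes from an offset table derived from the per-chunk lengths stored in the gzip extra field:

/* Sketch: locate the compressed data backing an uncompressed byte offset,
 * given the chunk size and the offsets[] table that dictzip_open() builds. */
static uint64_t dz_locate(uint64_t offset, uint64_t chunk_len,
                          const uint64_t *offsets, uint64_t *pos_in_chunk)
{
    uint64_t chunk = offset / chunk_len;   /* which compressed piece */
    *pos_in_chunk = offset % chunk_len;    /* where to read after inflating it */
    return offsets[chunk];                 /* file offset of that piece's gzip data */
}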
#include "qemu-common.h"
#include "block_int.h"
#include <zlib.h>
// #define DEBUG
#ifdef DEBUG
#define dprintf(fmt, ...) do { printf("dzip: " fmt, ## __VA_ARGS__); } while (0)
#else
#define dprintf(fmt, ...) do { } while (0)
#endif
#define SECTOR_SIZE 512
#define Z_STREAM_COUNT 4
#define CACHE_COUNT 20
/* magic values */
#define GZ_MAGIC1 0x1f
#define GZ_MAGIC2 0x8b
#define DZ_MAGIC1 'R'
#define DZ_MAGIC2 'A'
#define GZ_FEXTRA 0x04 /* Optional field (random access index) */
#define GZ_FNAME 0x08 /* Original name */
#define GZ_COMMENT 0x10 /* Zero-terminated, human-readable comment */
#define GZ_FHCRC 0x02 /* Header CRC16 */
/* offsets */
#define GZ_ID 0 /* GZ_MAGIC (16bit) */
#define GZ_FLG 3 /* FLaGs (see above) */
#define GZ_XLEN 10 /* eXtra LENgth (16bit) */
#define GZ_SI 12 /* Subfield ID (16bit) */
#define GZ_VERSION 16 /* Version for subfield format */
#define GZ_CHUNKSIZE 18 /* Chunk size (16bit) */
#define GZ_CHUNKCNT 20 /* Number of chunks (16bit) */
#define GZ_RNDDATA 22 /* Random access data (16bit) */
#define GZ_99_CHUNKSIZE 18 /* Chunk size (32bit) */
#define GZ_99_CHUNKCNT 22 /* Number of chunks (32bit) */
#define GZ_99_FILESIZE 26 /* Size of unpacked file (64bit) */
#define GZ_99_RNDDATA 34 /* Random access data (32bit) */
struct BDRVDictZipState;
typedef struct DictZipAIOCB {
BlockDriverAIOCB common;
struct BDRVDictZipState *s;
QEMUIOVector *qiov; /* QIOV of the original request */
QEMUIOVector *qiov_gz; /* QIOV of the gz subrequest */
QEMUBH *bh; /* BH for cache */
z_stream *zStream; /* stream to use for decoding */
int zStream_id; /* stream id of the above pointer */
size_t start; /* offset into the uncompressed file */
size_t len; /* uncompressed bytes to read */
uint8_t *gzipped; /* the gzipped data */
uint8_t *buf; /* cached result */
size_t gz_len; /* amount of gzip data */
size_t gz_start; /* uncompressed starting point of gzip data */
uint64_t offset; /* offset for "start" into the uncompressed chunk */
int chunks_len; /* amount of uncompressed data in all gzip data */
} DictZipAIOCB;
typedef struct dict_cache {
size_t start;
size_t len;
uint8_t *buf;
} DictCache;
typedef struct BDRVDictZipState {
BlockDriverState *hd;
z_stream zStream[Z_STREAM_COUNT];
DictCache cache[CACHE_COUNT];
int cache_index;
uint8_t stream_in_use;
uint64_t chunk_len;
uint32_t chunk_cnt;
uint16_t *chunks;
uint32_t *chunks32;
uint64_t *offsets;
int64_t file_len;
} BDRVDictZipState;
static int dictzip_probe(const uint8_t *buf, int buf_size, const char *filename)
{
if (buf_size < 2)
return 0;
/* We match on every gzip file */
if ((buf[0] == GZ_MAGIC1) && (buf[1] == GZ_MAGIC2))
return 100;
return 0;
}
static int start_zStream(z_stream *zStream)
{
zStream->zalloc = NULL;
zStream->zfree = NULL;
zStream->opaque = NULL;
zStream->next_in = 0;
zStream->avail_in = 0;
zStream->next_out = NULL;
zStream->avail_out = 0;
return inflateInit2( zStream, -15 );
}
static int dictzip_open(BlockDriverState *bs, const char *filename, int flags)
{
BDRVDictZipState *s = bs->opaque;
const char *err = "Unknown (read error?)";
uint8_t magic[2];
char buf[100];
uint8_t header_flags;
uint16_t chunk_len16;
uint16_t chunk_cnt16;
uint16_t header_ver;
uint16_t tmp_short;
uint64_t offset;
int chunks_len;
int headerLength = GZ_XLEN - 1;
int rnd_offs;
int ret;
int i;
const char *fname = filename;
if (!strncmp(filename, "dzip://", 7))
fname += 7;
else if (!strncmp(filename, "dzip:", 5))
fname += 5;
ret = bdrv_file_open(&s->hd, fname, flags);
if (ret < 0)
return ret;
/* initialize zlib streams */
for (i = 0; i < Z_STREAM_COUNT; i++) {
if (start_zStream( &s->zStream[i] ) != Z_OK) {
err = s->zStream[i].msg;
goto fail;
}
}
/* gzip header */
if (bdrv_pread(s->hd, GZ_ID, &magic, sizeof(magic)) != sizeof(magic))
goto fail;
if (!((magic[0] == GZ_MAGIC1) && (magic[1] == GZ_MAGIC2))) {
err = "No gzip file";
goto fail;
}
/* dzip header */
if (bdrv_pread(s->hd, GZ_FLG, &header_flags, 1) != 1)
goto fail;
if (!(header_flags & GZ_FEXTRA)) {
err = "Not a dictzip file (wrong flags)";
goto fail;
}
/* extra length */
if (bdrv_pread(s->hd, GZ_XLEN, &tmp_short, 2) != 2)
goto fail;
headerLength += le16_to_cpu(tmp_short) + 2;
/* DictZip magic */
if (bdrv_pread(s->hd, GZ_SI, &magic, 2) != 2)
goto fail;
if (magic[0] != DZ_MAGIC1 || magic[1] != DZ_MAGIC2) {
err = "Not a dictzip file (missing extra magic)";
goto fail;
}
/* DictZip version */
if (bdrv_pread(s->hd, GZ_VERSION, &header_ver, 2) != 2)
goto fail;
header_ver = le16_to_cpu(header_ver);
switch (header_ver) {
case 1: /* Normal DictZip */
/* number of chunks */
if (bdrv_pread(s->hd, GZ_CHUNKSIZE, &chunk_len16, 2) != 2)
goto fail;
s->chunk_len = le16_to_cpu(chunk_len16);
/* chunk count */
if (bdrv_pread(s->hd, GZ_CHUNKCNT, &chunk_cnt16, 2) != 2)
goto fail;
s->chunk_cnt = le16_to_cpu(chunk_cnt16);
chunks_len = sizeof(short) * s->chunk_cnt;
rnd_offs = GZ_RNDDATA;
break;
case 99: /* Special Alex pigz version */
/* number of chunks */
if (bdrv_pread(s->hd, GZ_99_CHUNKSIZE, &s->chunk_len, 4) != 4)
goto fail;
dprintf("chunk len [%#x] = %d\n", GZ_99_CHUNKSIZE, s->chunk_len);
s->chunk_len = le32_to_cpu(s->chunk_len);
/* chunk count */
if (bdrv_pread(s->hd, GZ_99_CHUNKCNT, &s->chunk_cnt, 4) != 4)
goto fail;
s->chunk_cnt = le32_to_cpu(s->chunk_cnt);
dprintf("chunk len | count = %d | %d\n", s->chunk_len, s->chunk_cnt);
/* file size */
if (bdrv_pread(s->hd, GZ_99_FILESIZE, &s->file_len, 8) != 8)
goto fail;
s->file_len = le64_to_cpu(s->file_len);
chunks_len = sizeof(int) * s->chunk_cnt;
rnd_offs = GZ_99_RNDDATA;
break;
default:
err = "Invalid DictZip version";
goto fail;
}
/* random access data */
s->chunks = g_malloc(chunks_len);
if (header_ver == 99)
s->chunks32 = (uint32_t *)s->chunks;
if (bdrv_pread(s->hd, rnd_offs, s->chunks, chunks_len) != chunks_len)
goto fail;
/* orig filename */
if (header_flags & GZ_FNAME) {
if (bdrv_pread(s->hd, headerLength + 1, buf, sizeof(buf)) != sizeof(buf))
goto fail;
buf[sizeof(buf) - 1] = '\0';
headerLength += strlen(buf) + 1;
if (strlen(buf) == sizeof(buf))
goto fail;
dprintf("filename: %s\n", buf);
}
/* comment field */
if (header_flags & GZ_COMMENT) {
if (bdrv_pread(s->hd, headerLength, buf, sizeof(buf)) != sizeof(buf))
goto fail;
buf[sizeof(buf) - 1] = '\0';
headerLength += strlen(buf) + 1;
if (strlen(buf) == sizeof(buf))
goto fail;
dprintf("comment: %s\n", buf);
}
if (header_flags & GZ_FHCRC)
headerLength += 2;
/* uncompressed file length*/
if (!s->file_len) {
uint32_t file_len;
if (bdrv_pread(s->hd, bdrv_getlength(s->hd) - 4, &file_len, 4) != 4)
goto fail;
s->file_len = le32_to_cpu(file_len);
}
/* compute offsets */
s->offsets = g_malloc(sizeof( *s->offsets ) * s->chunk_cnt);
for (offset = headerLength + 1, i = 0; i < s->chunk_cnt; i++) {
s->offsets[i] = offset;
switch (header_ver) {
case 1:
offset += s->chunks[i];
break;
case 99:
offset += s->chunks32[i];
break;
}
dprintf("chunk %#x - %#x = offset %#x -> %#x\n", i * s->chunk_len, (i+1) * s->chunk_len, s->offsets[i], offset);
}
return 0;
fail:
fprintf(stderr, "DictZip: Error opening file: %s\n", err);
bdrv_delete(s->hd);
if (s->chunks)
g_free(s->chunks);
return -EINVAL;
}
/* This callback gets invoked when we have the result in cache already */
static void dictzip_cache_cb(void *opaque)
{
DictZipAIOCB *acb = (DictZipAIOCB *)opaque;
qemu_iovec_from_buf(acb->qiov, 0, acb->buf, acb->len);
acb->common.cb(acb->common.opaque, 0);
qemu_bh_delete(acb->bh);
qemu_aio_release(acb);
}
/* This callback gets invoked by the underlying block reader when we have
* all compressed data. We uncompress in here. */
static void dictzip_read_cb(void *opaque, int ret)
{
DictZipAIOCB *acb = (DictZipAIOCB *)opaque;
struct BDRVDictZipState *s = acb->s;
uint8_t *buf;
DictCache *cache;
int r;
buf = g_malloc(acb->chunks_len);
/* uncompress the chunk */
acb->zStream->next_in = acb->gzipped;
acb->zStream->avail_in = acb->gz_len;
acb->zStream->next_out = buf;
acb->zStream->avail_out = acb->chunks_len;
r = inflate( acb->zStream, Z_PARTIAL_FLUSH );
if ( (r != Z_OK) && (r != Z_STREAM_END) )
fprintf(stderr, "Error inflating: [%d] %s\n", r, acb->zStream->msg);
if ( r == Z_STREAM_END )
inflateReset(acb->zStream);
dprintf("inflating [%d] left: %d | %d bytes\n", r, acb->zStream->avail_in, acb->zStream->avail_out);
s->stream_in_use &= ~(1 << acb->zStream_id);
/* notify the caller */
qemu_iovec_from_buf(acb->qiov, 0, buf + acb->offset, acb->len);
acb->common.cb(acb->common.opaque, 0);
/* fill the cache */
cache = &s->cache[s->cache_index];
s->cache_index++;
if (s->cache_index == CACHE_COUNT)
s->cache_index = 0;
cache->len = 0;
if (cache->buf)
g_free(cache->buf);
cache->start = acb->gz_start;
cache->buf = buf;
cache->len = acb->chunks_len;
/* free occupied resources */
g_free(acb->qiov_gz);
qemu_aio_release(acb);
}
static void dictzip_aio_cancel(BlockDriverAIOCB *blockacb)
{
}
static const AIOCBInfo dictzip_aiocb_info = {
.aiocb_size = sizeof(DictZipAIOCB),
.cancel = dictzip_aio_cancel,
};
/* This is where we get a request from a caller to read something */
static BlockDriverAIOCB *dictzip_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
BDRVDictZipState *s = bs->opaque;
DictZipAIOCB *acb;
QEMUIOVector *qiov_gz;
struct iovec *iov;
uint8_t *buf;
size_t start = sector_num * SECTOR_SIZE;
size_t len = nb_sectors * SECTOR_SIZE;
size_t end = start + len;
size_t gz_start;
size_t gz_len;
int64_t gz_sector_num;
int gz_nb_sectors;
int first_chunk, last_chunk;
int first_offset;
int i;
acb = qemu_aio_get(&dictzip_aiocb_info, bs, cb, opaque);
if (!acb)
return NULL;
/* Search Cache */
for (i = 0; i < CACHE_COUNT; i++) {
if (!s->cache[i].len)
continue;
if ((start >= s->cache[i].start) &&
(end <= (s->cache[i].start + s->cache[i].len))) {
acb->buf = s->cache[i].buf + (start - s->cache[i].start);
acb->len = len;
acb->qiov = qiov;
acb->bh = qemu_bh_new(dictzip_cache_cb, acb);
qemu_bh_schedule(acb->bh);
return &acb->common;
}
}
/* No cache, so let's decode */
do {
for (i = 0; i < Z_STREAM_COUNT; i++) {
if (!(s->stream_in_use & (1 << i))) {
s->stream_in_use |= (1 << i);
acb->zStream_id = i;
acb->zStream = &s->zStream[i];
break;
}
}
} while(!acb->zStream);
/* We need to read these chunks */
first_chunk = start / s->chunk_len;
first_offset = start - first_chunk * s->chunk_len;
last_chunk = end / s->chunk_len;
gz_start = s->offsets[first_chunk];
gz_len = 0;
for (i = first_chunk; i <= last_chunk; i++) {
if (s->chunks32)
gz_len += s->chunks32[i];
else
gz_len += s->chunks[i];
}
gz_sector_num = gz_start / SECTOR_SIZE;
gz_nb_sectors = (gz_len / SECTOR_SIZE);
/* account for tail and heads */
while ((gz_start + gz_len) > ((gz_sector_num + gz_nb_sectors) * SECTOR_SIZE))
gz_nb_sectors++;
/* Allocate qiov, iov and buf in one chunk so we only need to free qiov */
qiov_gz = g_malloc0(sizeof(QEMUIOVector) + sizeof(struct iovec) +
(gz_nb_sectors * SECTOR_SIZE));
iov = (struct iovec *)(((char *)qiov_gz) + sizeof(QEMUIOVector));
buf = ((uint8_t *)iov) + sizeof(struct iovec *);
/* Kick off the read by the backing file, so we can start decompressing */
iov->iov_base = (void *)buf;
iov->iov_len = gz_nb_sectors * 512;
qemu_iovec_init_external(qiov_gz, iov, 1);
dprintf("read %d - %d => %d - %d\n", start, end, gz_start, gz_start + gz_len);
acb->s = s;
acb->qiov = qiov;
acb->qiov_gz = qiov_gz;
acb->start = start;
acb->len = len;
acb->gzipped = buf + (gz_start % SECTOR_SIZE);
acb->gz_len = gz_len;
acb->gz_start = first_chunk * s->chunk_len;
acb->offset = first_offset;
acb->chunks_len = (last_chunk - first_chunk + 1) * s->chunk_len;
return bdrv_aio_readv(s->hd, gz_sector_num, qiov_gz, gz_nb_sectors,
dictzip_read_cb, acb);
}
static void dictzip_close(BlockDriverState *bs)
{
BDRVDictZipState *s = bs->opaque;
int i;
for (i = 0; i < CACHE_COUNT; i++) {
if (!s->cache[i].len)
continue;
g_free(s->cache[i].buf);
}
for (i = 0; i < Z_STREAM_COUNT; i++) {
inflateEnd(&s->zStream[i]);
}
if (s->chunks)
g_free(s->chunks);
if (s->offsets)
g_free(s->offsets);
dprintf("Close\n");
}
static int64_t dictzip_getlength(BlockDriverState *bs)
{
BDRVDictZipState *s = bs->opaque;
dprintf("getlength -> %ld\n", s->file_len);
return s->file_len;
}
static BlockDriver bdrv_dictzip = {
.format_name = "dzip",
.protocol_name = "dzip",
.instance_size = sizeof(BDRVDictZipState),
.bdrv_file_open = dictzip_open,
.bdrv_close = dictzip_close,
.bdrv_getlength = dictzip_getlength,
.bdrv_probe = dictzip_probe,
.bdrv_aio_readv = dictzip_aio_readv,
};
static void dictzip_block_init(void)
{
bdrv_register(&bdrv_dictzip);
}
block_init(dictzip_block_init);


@@ -22,19 +22,11 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/bswap.h"
#include "qemu/module.h"
#include "block_int.h"
#include "bswap.h"
#include "module.h"
#include <zlib.h>
enum {
/* Limit chunk sizes to prevent unreasonable amounts of memory being used
* or truncating when converting to 32-bit types
*/
DMG_LENGTHS_MAX = 64 * 1024 * 1024, /* 64 MB */
DMG_SECTORCOUNTS_MAX = DMG_LENGTHS_MAX / 512,
};
typedef struct BDRVDMGState {
CoMutex lock;
/* each chunk contains a certain number of sectors,
@@ -59,87 +51,35 @@ typedef struct BDRVDMGState {
static int dmg_probe(const uint8_t *buf, int buf_size, const char *filename)
{
int len;
if (!filename) {
return 0;
}
len = strlen(filename);
if (len > 4 && !strcmp(filename + len - 4, ".dmg")) {
return 2;
}
int len=strlen(filename);
if(len>4 && !strcmp(filename+len-4,".dmg"))
return 2;
return 0;
}
static int read_uint64(BlockDriverState *bs, int64_t offset, uint64_t *result)
static off_t read_off(BlockDriverState *bs, int64_t offset)
{
uint64_t buffer;
int ret;
ret = bdrv_pread(bs->file, offset, &buffer, 8);
if (ret < 0) {
return ret;
}
*result = be64_to_cpu(buffer);
return 0;
uint64_t buffer;
if (bdrv_pread(bs->file, offset, &buffer, 8) < 8)
return 0;
return be64_to_cpu(buffer);
}
static int read_uint32(BlockDriverState *bs, int64_t offset, uint32_t *result)
static off_t read_uint32(BlockDriverState *bs, int64_t offset)
{
uint32_t buffer;
int ret;
ret = bdrv_pread(bs->file, offset, &buffer, 4);
if (ret < 0) {
return ret;
}
*result = be32_to_cpu(buffer);
return 0;
uint32_t buffer;
if (bdrv_pread(bs->file, offset, &buffer, 4) < 4)
return 0;
return be32_to_cpu(buffer);
}
/* Increase max chunk sizes, if necessary. This function is used to calculate
* the buffer sizes needed for compressed/uncompressed chunk I/O.
*/
static void update_max_chunk_size(BDRVDMGState *s, uint32_t chunk,
uint32_t *max_compressed_size,
uint32_t *max_sectors_per_chunk)
{
uint32_t compressed_size = 0;
uint32_t uncompressed_sectors = 0;
switch (s->types[chunk]) {
case 0x80000005: /* zlib compressed */
compressed_size = s->lengths[chunk];
uncompressed_sectors = s->sectorcounts[chunk];
break;
case 1: /* copy */
uncompressed_sectors = (s->lengths[chunk] + 511) / 512;
break;
case 2: /* zero */
uncompressed_sectors = s->sectorcounts[chunk];
break;
}
if (compressed_size > *max_compressed_size) {
*max_compressed_size = compressed_size;
}
if (uncompressed_sectors > *max_sectors_per_chunk) {
*max_sectors_per_chunk = uncompressed_sectors;
}
}
static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int dmg_open(BlockDriverState *bs, int flags)
{
BDRVDMGState *s = bs->opaque;
uint64_t info_begin, info_end, last_in_offset, last_out_offset;
uint32_t count, tmp;
uint32_t max_compressed_size = 1, max_sectors_per_chunk = 1, i;
off_t info_begin,info_end,last_in_offset,last_out_offset;
uint32_t count;
uint32_t max_compressed_size=1,max_sectors_per_chunk=1,i;
int64_t offset;
int ret;
bs->read_only = 1;
s->n_chunks = 0;
@@ -148,32 +88,21 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
/* read offset of info blocks */
offset = bdrv_getlength(bs->file);
if (offset < 0) {
ret = offset;
goto fail;
}
offset -= 0x1d8;
ret = read_uint64(bs, offset, &info_begin);
if (ret < 0) {
goto fail;
} else if (info_begin == 0) {
ret = -EINVAL;
info_begin = read_off(bs, offset);
if (info_begin == 0) {
goto fail;
}
if (read_uint32(bs, info_begin) != 0x100) {
goto fail;
}
ret = read_uint32(bs, info_begin, &tmp);
if (ret < 0) {
goto fail;
} else if (tmp != 0x100) {
ret = -EINVAL;
goto fail;
}
ret = read_uint32(bs, info_begin + 4, &count);
if (ret < 0) {
goto fail;
} else if (count == 0) {
ret = -EINVAL;
count = read_uint32(bs, info_begin + 4);
if (count == 0) {
goto fail;
}
info_end = info_begin + count;
@@ -185,205 +114,154 @@ static int dmg_open(BlockDriverState *bs, QDict *options, int flags,
while (offset < info_end) {
uint32_t type;
ret = read_uint32(bs, offset, &count);
if (ret < 0) {
goto fail;
} else if (count == 0) {
ret = -EINVAL;
goto fail;
}
count = read_uint32(bs, offset);
if(count==0)
goto fail;
offset += 4;
ret = read_uint32(bs, offset, &type);
if (ret < 0) {
goto fail;
}
if (type == 0x6d697368 && count >= 244) {
size_t new_size;
uint32_t chunk_count;
type = read_uint32(bs, offset);
if (type == 0x6d697368 && count >= 244) {
int new_size, chunk_count;
offset += 4;
offset += 200;
chunk_count = (count - 204) / 40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size / 2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
chunk_count = (count-204)/40;
new_size = sizeof(uint64_t) * (s->n_chunks + chunk_count);
s->types = g_realloc(s->types, new_size/2);
s->offsets = g_realloc(s->offsets, new_size);
s->lengths = g_realloc(s->lengths, new_size);
s->sectors = g_realloc(s->sectors, new_size);
s->sectorcounts = g_realloc(s->sectorcounts, new_size);
for (i = s->n_chunks; i < s->n_chunks + chunk_count; i++) {
ret = read_uint32(bs, offset, &s->types[i]);
if (ret < 0) {
goto fail;
}
offset += 4;
if (s->types[i] != 0x80000005 && s->types[i] != 1 &&
s->types[i] != 2) {
if (s->types[i] == 0xffffffff && i > 0) {
last_in_offset = s->offsets[i - 1] + s->lengths[i - 1];
last_out_offset = s->sectors[i - 1] +
s->sectorcounts[i - 1];
}
chunk_count--;
i--;
offset += 36;
continue;
}
offset += 4;
for(i=s->n_chunks;i<s->n_chunks+chunk_count;i++) {
s->types[i] = read_uint32(bs, offset);
offset += 4;
if(s->types[i]!=0x80000005 && s->types[i]!=1 && s->types[i]!=2) {
if(s->types[i]==0xffffffff) {
last_in_offset = s->offsets[i-1]+s->lengths[i-1];
last_out_offset = s->sectors[i-1]+s->sectorcounts[i-1];
}
chunk_count--;
i--;
offset += 36;
continue;
}
offset += 4;
ret = read_uint64(bs, offset, &s->sectors[i]);
if (ret < 0) {
goto fail;
}
s->sectors[i] += last_out_offset;
offset += 8;
s->sectors[i] = last_out_offset+read_off(bs, offset);
offset += 8;
ret = read_uint64(bs, offset, &s->sectorcounts[i]);
if (ret < 0) {
goto fail;
}
offset += 8;
s->sectorcounts[i] = read_off(bs, offset);
offset += 8;
if (s->sectorcounts[i] > DMG_SECTORCOUNTS_MAX) {
error_report("sector count %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->sectorcounts[i], i, DMG_SECTORCOUNTS_MAX);
ret = -EINVAL;
goto fail;
}
s->offsets[i] = last_in_offset+read_off(bs, offset);
offset += 8;
ret = read_uint64(bs, offset, &s->offsets[i]);
if (ret < 0) {
goto fail;
}
s->offsets[i] += last_in_offset;
offset += 8;
s->lengths[i] = read_off(bs, offset);
offset += 8;
ret = read_uint64(bs, offset, &s->lengths[i]);
if (ret < 0) {
goto fail;
}
offset += 8;
if (s->lengths[i] > DMG_LENGTHS_MAX) {
error_report("length %" PRIu64 " for chunk %" PRIu32
" is larger than max (%u)",
s->lengths[i], i, DMG_LENGTHS_MAX);
ret = -EINVAL;
goto fail;
}
update_max_chunk_size(s, i, &max_compressed_size,
&max_sectors_per_chunk);
}
s->n_chunks += chunk_count;
}
if(s->lengths[i]>max_compressed_size)
max_compressed_size = s->lengths[i];
if(s->sectorcounts[i]>max_sectors_per_chunk)
max_sectors_per_chunk = s->sectorcounts[i];
}
s->n_chunks+=chunk_count;
}
}
/* initialize zlib engine */
s->compressed_chunk = g_malloc(max_compressed_size + 1);
s->uncompressed_chunk = g_malloc(512 * max_sectors_per_chunk);
if (inflateInit(&s->zstream) != Z_OK) {
ret = -EINVAL;
goto fail;
}
s->compressed_chunk = g_malloc(max_compressed_size+1);
s->uncompressed_chunk = g_malloc(512*max_sectors_per_chunk);
if(inflateInit(&s->zstream) != Z_OK)
goto fail;
s->current_chunk = s->n_chunks;
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->types);
g_free(s->offsets);
g_free(s->lengths);
g_free(s->sectors);
g_free(s->sectorcounts);
g_free(s->compressed_chunk);
g_free(s->uncompressed_chunk);
return ret;
return -1;
}
static inline int is_sector_in_chunk(BDRVDMGState* s,
uint32_t chunk_num, uint64_t sector_num)
uint32_t chunk_num,int sector_num)
{
if (chunk_num >= s->n_chunks || s->sectors[chunk_num] > sector_num ||
s->sectors[chunk_num] + s->sectorcounts[chunk_num] <= sector_num) {
return 0;
} else {
return -1;
}
if(chunk_num>=s->n_chunks || s->sectors[chunk_num]>sector_num ||
s->sectors[chunk_num]+s->sectorcounts[chunk_num]<=sector_num)
return 0;
else
return -1;
}
static inline uint32_t search_chunk(BDRVDMGState *s, uint64_t sector_num)
static inline uint32_t search_chunk(BDRVDMGState* s,int sector_num)
{
/* binary search */
uint32_t chunk1 = 0, chunk2 = s->n_chunks, chunk3;
while (chunk1 != chunk2) {
chunk3 = (chunk1 + chunk2) / 2;
if (s->sectors[chunk3] > sector_num) {
chunk2 = chunk3;
} else if (s->sectors[chunk3] + s->sectorcounts[chunk3] > sector_num) {
return chunk3;
} else {
chunk1 = chunk3;
}
uint32_t chunk1=0,chunk2=s->n_chunks,chunk3;
while(chunk1!=chunk2) {
chunk3 = (chunk1+chunk2)/2;
if(s->sectors[chunk3]>sector_num)
chunk2 = chunk3;
else if(s->sectors[chunk3]+s->sectorcounts[chunk3]>sector_num)
return chunk3;
else
chunk1 = chunk3;
}
return s->n_chunks; /* error */
}
static inline int dmg_read_chunk(BlockDriverState *bs, uint64_t sector_num)
static inline int dmg_read_chunk(BlockDriverState *bs, int sector_num)
{
BDRVDMGState *s = bs->opaque;
if (!is_sector_in_chunk(s, s->current_chunk, sector_num)) {
int ret;
uint32_t chunk = search_chunk(s, sector_num);
if(!is_sector_in_chunk(s,s->current_chunk,sector_num)) {
int ret;
uint32_t chunk = search_chunk(s,sector_num);
if (chunk >= s->n_chunks) {
return -1;
}
if(chunk>=s->n_chunks)
return -1;
s->current_chunk = s->n_chunks;
switch (s->types[chunk]) {
case 0x80000005: { /* zlib compressed */
/* we need to buffer, because only the chunk as whole can be
* inflated. */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->compressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
}
s->current_chunk = s->n_chunks;
switch(s->types[chunk]) {
case 0x80000005: { /* zlib compressed */
int i;
s->zstream.next_in = s->compressed_chunk;
s->zstream.avail_in = s->lengths[chunk];
s->zstream.next_out = s->uncompressed_chunk;
s->zstream.avail_out = 512 * s->sectorcounts[chunk];
ret = inflateReset(&s->zstream);
if (ret != Z_OK) {
return -1;
}
ret = inflate(&s->zstream, Z_FINISH);
if (ret != Z_STREAM_END ||
s->zstream.total_out != 512 * s->sectorcounts[chunk]) {
return -1;
}
break; }
case 1: /* copy */
ret = bdrv_pread(bs->file, s->offsets[chunk],
/* we need to buffer, because only the chunk as whole can be
* inflated. */
i=0;
do {
ret = bdrv_pread(bs->file, s->offsets[chunk] + i,
s->compressed_chunk+i, s->lengths[chunk]-i);
if(ret<0 && errno==EINTR)
ret=0;
i+=ret;
} while(ret>=0 && ret+i<s->lengths[chunk]);
if (ret != s->lengths[chunk])
return -1;
s->zstream.next_in = s->compressed_chunk;
s->zstream.avail_in = s->lengths[chunk];
s->zstream.next_out = s->uncompressed_chunk;
s->zstream.avail_out = 512*s->sectorcounts[chunk];
ret = inflateReset(&s->zstream);
if(ret != Z_OK)
return -1;
ret = inflate(&s->zstream, Z_FINISH);
if(ret != Z_STREAM_END || s->zstream.total_out != 512*s->sectorcounts[chunk])
return -1;
break; }
case 1: /* copy */
ret = bdrv_pread(bs->file, s->offsets[chunk],
s->uncompressed_chunk, s->lengths[chunk]);
if (ret != s->lengths[chunk]) {
return -1;
}
break;
case 2: /* zero */
memset(s->uncompressed_chunk, 0, 512 * s->sectorcounts[chunk]);
break;
}
s->current_chunk = chunk;
if (ret != s->lengths[chunk])
return -1;
break;
case 2: /* zero */
memset(s->uncompressed_chunk, 0, 512*s->sectorcounts[chunk]);
break;
}
s->current_chunk = chunk;
}
return 0;
}
@@ -394,14 +272,12 @@ static int dmg_read(BlockDriverState *bs, int64_t sector_num,
BDRVDMGState *s = bs->opaque;
int i;
for (i = 0; i < nb_sectors; i++) {
uint32_t sector_offset_in_chunk;
if (dmg_read_chunk(bs, sector_num + i) != 0) {
return -1;
}
sector_offset_in_chunk = sector_num + i - s->sectors[s->current_chunk];
memcpy(buf + i * 512,
s->uncompressed_chunk + sector_offset_in_chunk * 512, 512);
for(i=0;i<nb_sectors;i++) {
uint32_t sector_offset_in_chunk;
if(dmg_read_chunk(bs, sector_num+i) != 0)
return -1;
sector_offset_in_chunk = sector_num+i-s->sectors[s->current_chunk];
memcpy(buf+i*512,s->uncompressed_chunk+sector_offset_in_chunk*512,512);
}
return 0;
}
@@ -420,25 +296,25 @@ static coroutine_fn int dmg_co_read(BlockDriverState *bs, int64_t sector_num,
static void dmg_close(BlockDriverState *bs)
{
BDRVDMGState *s = bs->opaque;
g_free(s->types);
g_free(s->offsets);
g_free(s->lengths);
g_free(s->sectors);
g_free(s->sectorcounts);
g_free(s->compressed_chunk);
g_free(s->uncompressed_chunk);
if(s->n_chunks>0) {
free(s->types);
free(s->offsets);
free(s->lengths);
free(s->sectors);
free(s->sectorcounts);
}
free(s->compressed_chunk);
free(s->uncompressed_chunk);
inflateEnd(&s->zstream);
}
static BlockDriver bdrv_dmg = {
.format_name = "dmg",
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_read = dmg_co_read,
.bdrv_close = dmg_close,
.format_name = "dmg",
.instance_size = sizeof(BDRVDMGState),
.bdrv_probe = dmg_probe,
.bdrv_open = dmg_open,
.bdrv_read = dmg_co_read,
.bdrv_close = dmg_close,
};
static void bdrv_dmg_init(void)


@@ -3,26 +3,43 @@
*
* Copyright (C) 2012 Bharata B Rao <bharata@linux.vnet.ibm.com>
*
* This work is licensed under the terms of the GNU GPL, version 2 or later.
* See the COPYING file in the top-level directory.
* Pipe handling mechanism in AIO implementation is derived from
* block/rbd.c. Hence,
*
* Copyright (C) 2010-2011 Christian Brunner <chb@muc.de>,
* Josh Durgin <josh.durgin@dreamhost.com>
*
* This work is licensed under the terms of the GNU GPL, version 2. See
* the COPYING file in the top-level directory.
*
* Contributions after 2012-01-13 are licensed under the terms of the
* GNU GPL, version 2 or (at your option) any later version.
*/
#include <glusterfs/api/glfs.h>
#include "block/block_int.h"
#include "qemu/uri.h"
#include "block_int.h"
#include "qemu_socket.h"
#include "uri.h"
typedef struct GlusterAIOCB {
BlockDriverAIOCB common;
int64_t size;
int ret;
bool *finished;
QEMUBH *bh;
Coroutine *coroutine;
} GlusterAIOCB;
typedef struct BDRVGlusterState {
struct glfs *glfs;
int fds[2];
struct glfs_fd *fd;
int qemu_aio_count;
int event_reader_pos;
GlusterAIOCB *event_acb;
} BDRVGlusterState;
#define GLUSTER_FD_READ 0
#define GLUSTER_FD_WRITE 1
typedef struct GlusterConf {
char *server;
int port;
@@ -33,13 +50,11 @@ typedef struct GlusterConf {
static void qemu_gluster_gconf_free(GlusterConf *gconf)
{
if (gconf) {
g_free(gconf->server);
g_free(gconf->volname);
g_free(gconf->image);
g_free(gconf->transport);
g_free(gconf);
}
g_free(gconf->server);
g_free(gconf->volname);
g_free(gconf->image);
g_free(gconf->transport);
g_free(gconf);
}
static int parse_volume_options(GlusterConf *gconf, char *path)
@@ -80,7 +95,7 @@ static int parse_volume_options(GlusterConf *gconf, char *path)
* 'server' specifies the server where the volume file specification for
* the given volume resides. This can be either hostname, ipv4 address
* or ipv6 address. ipv6 address needs to be within square brackets [ ].
* If transport type is 'unix', then 'server' field should not be specified.
* If transport type is 'unix', then 'server' field should not be specifed.
* The 'socket' field needs to be populated with the path to unix domain
* socket.
*
@@ -117,7 +132,7 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
}
/* transport */
if (!uri->scheme || !strcmp(uri->scheme, "gluster")) {
if (!strcmp(uri->scheme, "gluster")) {
gconf->transport = g_strdup("tcp");
} else if (!strcmp(uri->scheme, "gluster+tcp")) {
gconf->transport = g_strdup("tcp");
@@ -153,7 +168,7 @@ static int qemu_gluster_parseuri(GlusterConf *gconf, const char *filename)
}
gconf->server = g_strdup(qp->p[0].value);
} else {
gconf->server = g_strdup(uri->server ? uri->server : "localhost");
gconf->server = g_strdup(uri->server);
gconf->port = uri->port;
}
@@ -165,8 +180,7 @@ out:
return ret;
}
static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
Error **errp)
static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename)
{
struct glfs *glfs = NULL;
int ret;
@@ -174,8 +188,8 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = qemu_gluster_parseuri(gconf, filename);
if (ret < 0) {
error_setg(errp, "Usage: file=gluster[+transport]://[server[:port]]/"
"volname/image[?socket=...]");
error_report("Usage: file=gluster[+transport]://[server[:port]]/"
"volname/image[?socket=...]");
errno = -ret;
goto out;
}
@@ -202,16 +216,9 @@ static struct glfs *qemu_gluster_init(GlusterConf *gconf, const char *filename,
ret = glfs_init(glfs);
if (ret) {
error_setg_errno(errp, errno,
"Gluster connection failed for server=%s port=%d "
"volume=%s image=%s transport=%s", gconf->server,
gconf->port, gconf->volname, gconf->image,
gconf->transport);
/* glfs_init sometimes doesn't set errno although docs suggest that */
if (errno == 0)
errno = EINVAL;
error_report("Gluster connection failed for server=%s port=%d "
"volume=%s image=%s transport=%s\n", gconf->server, gconf->port,
gconf->volname, gconf->image, gconf->transport);
goto out;
}
return glfs;
@@ -225,101 +232,96 @@ out:
return NULL;
}
static void qemu_gluster_complete_aio(void *opaque)
static void qemu_gluster_complete_aio(GlusterAIOCB *acb, BDRVGlusterState *s)
{
GlusterAIOCB *acb = (GlusterAIOCB *)opaque;
int ret;
bool *finished = acb->finished;
BlockDriverCompletionFunc *cb = acb->common.cb;
void *opaque = acb->common.opaque;
qemu_bh_delete(acb->bh);
acb->bh = NULL;
qemu_coroutine_enter(acb->coroutine, NULL);
}
/*
* AIO callback routine called from GlusterFS thread.
*/
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
if (!ret || ret == acb->size) {
acb->ret = 0; /* Success */
} else if (ret < 0) {
acb->ret = ret; /* Read/Write failed */
if (!acb->ret || acb->ret == acb->size) {
ret = 0; /* Success */
} else if (acb->ret < 0) {
ret = acb->ret; /* Read/Write failed */
} else {
acb->ret = -EIO; /* Partial read/write - fail it */
ret = -EIO; /* Partial read/write - fail it */
}
acb->bh = qemu_bh_new(qemu_gluster_complete_aio, acb);
qemu_bh_schedule(acb->bh);
s->qemu_aio_count--;
qemu_aio_release(acb);
cb(opaque, ret);
if (finished) {
*finished = true;
}
}
/* TODO Convert to fine grained options */
static QemuOptsList runtime_opts = {
.name = "gluster",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "filename",
.type = QEMU_OPT_STRING,
.help = "URL to the gluster image",
},
{ /* end of list */ }
},
};
static void qemu_gluster_parse_flags(int bdrv_flags, int *open_flags)
static void qemu_gluster_aio_event_reader(void *opaque)
{
assert(open_flags != NULL);
BDRVGlusterState *s = opaque;
ssize_t ret;
*open_flags |= O_BINARY;
do {
char *p = (char *)&s->event_acb;
if (bdrv_flags & BDRV_O_RDWR) {
*open_flags |= O_RDWR;
} else {
*open_flags |= O_RDONLY;
}
if ((bdrv_flags & BDRV_O_NOCACHE)) {
*open_flags |= O_DIRECT;
}
ret = read(s->fds[GLUSTER_FD_READ], p + s->event_reader_pos,
sizeof(s->event_acb) - s->event_reader_pos);
if (ret > 0) {
s->event_reader_pos += ret;
if (s->event_reader_pos == sizeof(s->event_acb)) {
s->event_reader_pos = 0;
qemu_gluster_complete_aio(s->event_acb, s);
}
}
} while (ret < 0 && errno == EINTR);
}
static int qemu_gluster_open(BlockDriverState *bs, QDict *options,
int bdrv_flags, Error **errp)
static int qemu_gluster_aio_flush_cb(void *opaque)
{
BDRVGlusterState *s = opaque;
return (s->qemu_aio_count > 0);
}
static int qemu_gluster_open(BlockDriverState *bs, const char *filename,
int bdrv_flags)
{
BDRVGlusterState *s = bs->opaque;
int open_flags = 0;
int open_flags = O_BINARY;
int ret = 0;
GlusterConf *gconf = g_malloc0(sizeof(GlusterConf));
QemuOpts *opts;
Error *local_err = NULL;
const char *filename;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
ret = -EINVAL;
goto out;
}
filename = qemu_opt_get(opts, "filename");
s->glfs = qemu_gluster_init(gconf, filename, errp);
s->glfs = qemu_gluster_init(gconf, filename);
if (!s->glfs) {
ret = -errno;
goto out;
}
qemu_gluster_parse_flags(bdrv_flags, &open_flags);
if (bdrv_flags & BDRV_O_RDWR) {
open_flags |= O_RDWR;
} else {
open_flags |= O_RDONLY;
}
if ((bdrv_flags & BDRV_O_NOCACHE)) {
open_flags |= O_DIRECT;
}
s->fd = glfs_open(s->glfs, gconf->image, open_flags);
if (!s->fd) {
ret = -errno;
goto out;
}
ret = qemu_pipe(s->fds);
if (ret < 0) {
ret = -errno;
goto out;
}
fcntl(s->fds[GLUSTER_FD_READ], F_SETFL, O_NONBLOCK);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ],
qemu_gluster_aio_event_reader, NULL, qemu_gluster_aio_flush_cb, s);
out:
qemu_opts_del(opts);
qemu_gluster_gconf_free(gconf);
if (!ret) {
return ret;
@@ -333,159 +335,16 @@ out:
return ret;
}
typedef struct BDRVGlusterReopenState {
struct glfs *glfs;
struct glfs_fd *fd;
} BDRVGlusterReopenState;
static int qemu_gluster_reopen_prepare(BDRVReopenState *state,
BlockReopenQueue *queue, Error **errp)
{
int ret = 0;
BDRVGlusterReopenState *reop_s;
GlusterConf *gconf = NULL;
int open_flags = 0;
assert(state != NULL);
assert(state->bs != NULL);
state->opaque = g_malloc0(sizeof(BDRVGlusterReopenState));
reop_s = state->opaque;
qemu_gluster_parse_flags(state->flags, &open_flags);
gconf = g_malloc0(sizeof(GlusterConf));
reop_s->glfs = qemu_gluster_init(gconf, state->bs->filename, errp);
if (reop_s->glfs == NULL) {
ret = -errno;
goto exit;
}
reop_s->fd = glfs_open(reop_s->glfs, gconf->image, open_flags);
if (reop_s->fd == NULL) {
/* reops->glfs will be cleaned up in _abort */
ret = -errno;
goto exit;
}
exit:
/* state->opaque will be freed in either the _abort or _commit */
qemu_gluster_gconf_free(gconf);
return ret;
}
static void qemu_gluster_reopen_commit(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
BDRVGlusterState *s = state->bs->opaque;
/* close the old */
if (s->fd) {
glfs_close(s->fd);
}
if (s->glfs) {
glfs_fini(s->glfs);
}
/* use the newly opened image / connection */
s->fd = reop_s->fd;
s->glfs = reop_s->glfs;
g_free(state->opaque);
state->opaque = NULL;
return;
}
static void qemu_gluster_reopen_abort(BDRVReopenState *state)
{
BDRVGlusterReopenState *reop_s = state->opaque;
if (reop_s == NULL) {
return;
}
if (reop_s->fd) {
glfs_close(reop_s->fd);
}
if (reop_s->glfs) {
glfs_fini(reop_s->glfs);
}
g_free(state->opaque);
state->opaque = NULL;
return;
}
#ifdef CONFIG_GLUSTERFS_ZEROFILL
static coroutine_fn int qemu_gluster_co_write_zeroes(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, BdrvRequestFlags flags)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
off_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
ret = glfs_zerofill_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
static inline bool gluster_supports_zerofill(void)
{
return 1;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return glfs_zerofill(fd, offset, size);
}
#else
static inline bool gluster_supports_zerofill(void)
{
return 0;
}
static inline int qemu_gluster_zerofill(struct glfs_fd *fd, int64_t offset,
int64_t size)
{
return 0;
}
#endif
static int qemu_gluster_create(const char *filename,
QEMUOptionParameter *options, Error **errp)
QEMUOptionParameter *options)
{
struct glfs *glfs;
struct glfs_fd *fd;
int ret = 0;
int prealloc = 0;
int64_t total_size = 0;
GlusterConf *gconf = g_malloc0(sizeof(GlusterConf));
glfs = qemu_gluster_init(gconf, filename, errp);
glfs = qemu_gluster_init(gconf, filename);
if (!glfs) {
ret = -errno;
goto out;
@@ -494,19 +353,6 @@ static int qemu_gluster_create(const char *filename,
while (options && options->name) {
if (!strcmp(options->name, BLOCK_OPT_SIZE)) {
total_size = options->value.n / BDRV_SECTOR_SIZE;
} else if (!strcmp(options->name, BLOCK_OPT_PREALLOC)) {
if (!options->value.s || !strcmp(options->value.s, "off")) {
prealloc = 0;
} else if (!strcmp(options->value.s, "full") &&
gluster_supports_zerofill()) {
prealloc = 1;
} else {
error_setg(errp, "Invalid preallocation mode: '%s'"
" or GlusterFS doesn't support zerofill API",
options->value.s);
ret = -EINVAL;
goto out;
}
}
options++;
}
@@ -516,15 +362,9 @@ static int qemu_gluster_create(const char *filename,
if (!fd) {
ret = -errno;
} else {
if (!glfs_ftruncate(fd, total_size * BDRV_SECTOR_SIZE)) {
if (prealloc && qemu_gluster_zerofill(fd, 0,
total_size * BDRV_SECTOR_SIZE)) {
ret = -errno;
}
} else {
if (glfs_ftruncate(fd, total_size * BDRV_SECTOR_SIZE) != 0) {
ret = -errno;
}
if (glfs_close(fd) != 0) {
ret = -errno;
}
@@ -537,18 +377,72 @@ out:
return ret;
}
static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov, int write)
static void qemu_gluster_aio_cancel(BlockDriverAIOCB *blockacb)
{
GlusterAIOCB *acb = (GlusterAIOCB *)blockacb;
bool finished = false;
acb->finished = &finished;
while (!finished) {
qemu_aio_wait();
}
}
static const AIOCBInfo gluster_aiocb_info = {
.aiocb_size = sizeof(GlusterAIOCB),
.cancel = qemu_gluster_aio_cancel,
};
static void gluster_finish_aiocb(struct glfs_fd *fd, ssize_t ret, void *arg)
{
GlusterAIOCB *acb = (GlusterAIOCB *)arg;
BlockDriverState *bs = acb->common.bs;
BDRVGlusterState *s = bs->opaque;
int retval;
acb->ret = ret;
retval = qemu_write_full(s->fds[GLUSTER_FD_WRITE], &acb, sizeof(acb));
if (retval != sizeof(acb)) {
/*
* Gluster AIO callback thread failed to notify the waiting
* QEMU thread about IO completion.
*
* Complete this IO request and make the disk inaccessible for
* subsequent reads and writes.
*/
error_report("Gluster failed to notify QEMU about IO completion");
qemu_mutex_lock_iothread(); /* We are in gluster thread context */
acb->common.cb(acb->common.opaque, -EIO);
qemu_aio_release(acb);
s->qemu_aio_count--;
close(s->fds[GLUSTER_FD_READ]);
close(s->fds[GLUSTER_FD_WRITE]);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL,
NULL);
bs->drv = NULL; /* Make the disk inaccessible */
qemu_mutex_unlock_iothread();
}
}
static BlockDriverAIOCB *qemu_gluster_aio_rw(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque, int write)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
GlusterAIOCB *acb;
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
size_t size;
off_t offset;
offset = sector_num * BDRV_SECTOR_SIZE;
size = nb_sectors * BDRV_SECTOR_SIZE;
s->qemu_aio_count++;
acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
acb->size = size;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->finished = NULL;
if (write) {
ret = glfs_pwritev_async(s->fd, qiov->iov, qiov->niov, offset, 0,
@@ -559,96 +453,55 @@ static coroutine_fn int qemu_gluster_co_rw(BlockDriverState *bs,
}
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
return &acb->common;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
s->qemu_aio_count--;
qemu_aio_release(acb);
return NULL;
}
static int qemu_gluster_truncate(BlockDriverState *bs, int64_t offset)
static BlockDriverAIOCB *qemu_gluster_aio_readv(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return qemu_gluster_aio_rw(bs, sector_num, qiov, nb_sectors, cb, opaque, 0);
}
static BlockDriverAIOCB *qemu_gluster_aio_writev(BlockDriverState *bs,
int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
BlockDriverCompletionFunc *cb, void *opaque)
{
return qemu_gluster_aio_rw(bs, sector_num, qiov, nb_sectors, cb, opaque, 1);
}
static BlockDriverAIOCB *qemu_gluster_aio_flush(BlockDriverState *bs,
BlockDriverCompletionFunc *cb, void *opaque)
{
int ret;
GlusterAIOCB *acb;
BDRVGlusterState *s = bs->opaque;
ret = glfs_ftruncate(s->fd, offset);
if (ret < 0) {
return -errno;
}
return 0;
}
static coroutine_fn int qemu_gluster_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 0);
}
static coroutine_fn int qemu_gluster_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, QEMUIOVector *qiov)
{
return qemu_gluster_co_rw(bs, sector_num, nb_sectors, qiov, 1);
}
static coroutine_fn int qemu_gluster_co_flush_to_disk(BlockDriverState *bs)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
acb = qemu_aio_get(&gluster_aiocb_info, bs, cb, opaque);
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
acb->finished = NULL;
s->qemu_aio_count++;
ret = glfs_fsync_async(s->fd, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
return &acb->common;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
s->qemu_aio_count--;
qemu_aio_release(acb);
return NULL;
}
#ifdef CONFIG_GLUSTERFS_DISCARD
static coroutine_fn int qemu_gluster_co_discard(BlockDriverState *bs,
int64_t sector_num, int nb_sectors)
{
int ret;
GlusterAIOCB *acb = g_slice_new(GlusterAIOCB);
BDRVGlusterState *s = bs->opaque;
size_t size = nb_sectors * BDRV_SECTOR_SIZE;
off_t offset = sector_num * BDRV_SECTOR_SIZE;
acb->size = 0;
acb->ret = 0;
acb->coroutine = qemu_coroutine_self();
ret = glfs_discard_async(s->fd, offset, size, &gluster_finish_aiocb, acb);
if (ret < 0) {
ret = -errno;
goto out;
}
qemu_coroutine_yield();
ret = acb->ret;
out:
g_slice_free(GlusterAIOCB, acb);
return ret;
}
#endif
static int64_t qemu_gluster_getlength(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
@@ -680,6 +533,10 @@ static void qemu_gluster_close(BlockDriverState *bs)
{
BDRVGlusterState *s = bs->opaque;
close(s->fds[GLUSTER_FD_READ]);
close(s->fds[GLUSTER_FD_WRITE]);
qemu_aio_set_fd_handler(s->fds[GLUSTER_FD_READ], NULL, NULL, NULL, NULL);
if (s->fd) {
glfs_close(s->fd);
s->fd = NULL;
@@ -687,23 +544,12 @@ static void qemu_gluster_close(BlockDriverState *bs)
glfs_fini(s->glfs);
}
static int qemu_gluster_has_zero_init(BlockDriverState *bs)
{
/* GlusterFS volume could be backed by a block device */
return 0;
}
static QEMUOptionParameter qemu_gluster_create_options[] = {
{
.name = BLOCK_OPT_SIZE,
.type = OPT_SIZE,
.help = "Virtual disk size"
},
{
.name = BLOCK_OPT_PREALLOC,
.type = OPT_STRING,
.help = "Preallocation mode (allowed values: off, full)"
},
{ NULL }
};
@@ -711,26 +557,14 @@ static BlockDriver bdrv_gluster = {
.format_name = "gluster",
.protocol_name = "gluster",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.create_options = qemu_gluster_create_options,
};
@@ -738,26 +572,14 @@ static BlockDriver bdrv_gluster_tcp = {
.format_name = "gluster",
.protocol_name = "gluster+tcp",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.create_options = qemu_gluster_create_options,
};
@@ -765,26 +587,14 @@ static BlockDriver bdrv_gluster_unix = {
.format_name = "gluster",
.protocol_name = "gluster+unix",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.create_options = qemu_gluster_create_options,
};
@@ -792,26 +602,14 @@ static BlockDriver bdrv_gluster_rdma = {
.format_name = "gluster",
.protocol_name = "gluster+rdma",
.instance_size = sizeof(BDRVGlusterState),
.bdrv_needs_filename = true,
.bdrv_file_open = qemu_gluster_open,
.bdrv_reopen_prepare = qemu_gluster_reopen_prepare,
.bdrv_reopen_commit = qemu_gluster_reopen_commit,
.bdrv_reopen_abort = qemu_gluster_reopen_abort,
.bdrv_close = qemu_gluster_close,
.bdrv_create = qemu_gluster_create,
.bdrv_getlength = qemu_gluster_getlength,
.bdrv_get_allocated_file_size = qemu_gluster_allocated_file_size,
.bdrv_truncate = qemu_gluster_truncate,
.bdrv_co_readv = qemu_gluster_co_readv,
.bdrv_co_writev = qemu_gluster_co_writev,
.bdrv_co_flush_to_disk = qemu_gluster_co_flush_to_disk,
.bdrv_has_zero_init = qemu_gluster_has_zero_init,
#ifdef CONFIG_GLUSTERFS_DISCARD
.bdrv_co_discard = qemu_gluster_co_discard,
#endif
#ifdef CONFIG_GLUSTERFS_ZEROFILL
.bdrv_co_write_zeroes = qemu_gluster_co_write_zeroes,
#endif
.bdrv_aio_readv = qemu_gluster_aio_readv,
.bdrv_aio_writev = qemu_gluster_aio_writev,
.bdrv_aio_flush = qemu_gluster_aio_flush,
.create_options = qemu_gluster_create_options,
};

File diff suppressed because it is too large


@@ -8,10 +8,10 @@
* See the COPYING file in the top-level directory.
*/
#include "qemu-common.h"
#include "block/aio.h"
#include "qemu/queue.h"
#include "qemu-aio.h"
#include "qemu-queue.h"
#include "block/raw-aio.h"
#include "qemu/event_notifier.h"
#include "event_notifier.h"
#include <libaio.h>
@@ -39,6 +39,7 @@ struct qemu_laiocb {
struct qemu_laio_state {
io_context_t ctx;
EventNotifier e;
int count;
};
static inline ssize_t io_event_ret(struct io_event *ev)
@@ -54,6 +55,8 @@ static void qemu_laio_process_completion(struct qemu_laio_state *s,
{
int ret;
s->count--;
ret = laiocb->ret;
if (ret != -ECANCELED) {
if (ret == laiocb->nbytes) {
@@ -98,6 +101,13 @@ static void qemu_laio_completion_cb(EventNotifier *e)
}
}
static int qemu_laio_flush_cb(EventNotifier *e)
{
struct qemu_laio_state *s = container_of(e, struct qemu_laio_state, e);
return (s->count > 0) ? 1 : 0;
}
static void laio_cancel(BlockDriverAIOCB *blockacb)
{
struct qemu_laiocb *laiocb = (struct qemu_laiocb *)blockacb;
@@ -167,11 +177,14 @@ BlockDriverAIOCB *laio_submit(BlockDriverState *bs, void *aio_ctx, int fd,
goto out_free_aiocb;
}
io_set_eventfd(&laiocb->iocb, event_notifier_get_fd(&s->e));
s->count++;
if (io_submit(s->ctx, 1, &iocbs) < 0)
goto out_free_aiocb;
goto out_dec_count;
return &laiocb->common;
out_dec_count:
s->count--;
out_free_aiocb:
qemu_aio_release(laiocb);
return NULL;
@@ -190,7 +203,8 @@ void *laio_init(void)
goto out_close_efd;
}
qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb);
qemu_aio_set_event_notifier(&s->e, qemu_laio_completion_cb,
qemu_laio_flush_cb);
return s;


@@ -12,52 +12,33 @@
*/
#include "trace.h"
#include "block/blockjob.h"
#include "block/block_int.h"
#include "blockjob.h"
#include "block_int.h"
#include "qemu/ratelimit.h"
#include "qemu/bitmap.h"
#define SLICE_TIME 100000000ULL /* ns */
#define MAX_IN_FLIGHT 16
enum {
/*
* Size of data buffer for populating the image file. This should be large
* enough to process multiple clusters in a single call, so that populating
* contiguous regions of the image is efficient.
*/
BLOCK_SIZE = 512 * BDRV_SECTORS_PER_DIRTY_CHUNK, /* in bytes */
};
/* The mirroring buffer is a list of granularity-sized chunks.
* Free chunks are organized in a list.
*/
typedef struct MirrorBuffer {
QSIMPLEQ_ENTRY(MirrorBuffer) next;
} MirrorBuffer;
#define SLICE_TIME 100000000ULL /* ns */
typedef struct MirrorBlockJob {
BlockJob common;
RateLimit limit;
BlockDriverState *target;
BlockDriverState *base;
bool is_none_mode;
MirrorSyncMode mode;
BlockdevOnError on_source_error, on_target_error;
bool synced;
bool should_complete;
int64_t sector_num;
int64_t granularity;
size_t buf_size;
unsigned long *cow_bitmap;
BdrvDirtyBitmap *dirty_bitmap;
HBitmapIter hbi;
uint8_t *buf;
QSIMPLEQ_HEAD(, MirrorBuffer) buf_free;
int buf_free_count;
unsigned long *in_flight_bitmap;
int in_flight;
int ret;
} MirrorBlockJob;
typedef struct MirrorOp {
MirrorBlockJob *s;
QEMUIOVector qiov;
int64_t sector_num;
int nb_sectors;
} MirrorOp;
static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
int error)
{
@@ -71,251 +52,51 @@ static BlockErrorAction mirror_error_action(MirrorBlockJob *s, bool read,
}
}
static void mirror_iteration_done(MirrorOp *op, int ret)
{
MirrorBlockJob *s = op->s;
struct iovec *iov;
int64_t chunk_num;
int i, nb_chunks, sectors_per_chunk;
trace_mirror_iteration_done(s, op->sector_num, op->nb_sectors, ret);
s->in_flight--;
iov = op->qiov.iov;
for (i = 0; i < op->qiov.niov; i++) {
MirrorBuffer *buf = (MirrorBuffer *) iov[i].iov_base;
QSIMPLEQ_INSERT_TAIL(&s->buf_free, buf, next);
s->buf_free_count++;
}
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
chunk_num = op->sector_num / sectors_per_chunk;
nb_chunks = op->nb_sectors / sectors_per_chunk;
bitmap_clear(s->in_flight_bitmap, chunk_num, nb_chunks);
if (s->cow_bitmap && ret >= 0) {
bitmap_set(s->cow_bitmap, chunk_num, nb_chunks);
}
qemu_iovec_destroy(&op->qiov);
g_slice_free(MirrorOp, op);
/* Enter coroutine when it is not sleeping. The coroutine sleeps to
* rate-limit itself. The coroutine will eventually resume since there is
* a sleep timeout so don't wake it early.
*/
if (s->common.busy) {
qemu_coroutine_enter(s->common.co, NULL);
}
}
static void mirror_write_complete(void *opaque, int ret)
{
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty(source, op->sector_num, op->nb_sectors);
action = mirror_error_action(s, false, -ret);
if (action == BDRV_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
}
}
mirror_iteration_done(op, ret);
}
static void mirror_read_complete(void *opaque, int ret)
{
MirrorOp *op = opaque;
MirrorBlockJob *s = op->s;
if (ret < 0) {
BlockDriverState *source = s->common.bs;
BlockErrorAction action;
bdrv_set_dirty(source, op->sector_num, op->nb_sectors);
action = mirror_error_action(s, true, -ret);
if (action == BDRV_ACTION_REPORT && s->ret >= 0) {
s->ret = ret;
}
mirror_iteration_done(op, ret);
return;
}
bdrv_aio_writev(s->target, op->sector_num, &op->qiov, op->nb_sectors,
mirror_write_complete, op);
}
static uint64_t coroutine_fn mirror_iteration(MirrorBlockJob *s)
static int coroutine_fn mirror_iteration(MirrorBlockJob *s,
BlockErrorAction *p_action)
{
BlockDriverState *source = s->common.bs;
int nb_sectors, sectors_per_chunk, nb_chunks;
int64_t end, sector_num, next_chunk, next_sector, hbitmap_next_sector;
uint64_t delay_ns;
MirrorOp *op;
BlockDriverState *target = s->target;
QEMUIOVector qiov;
int ret, nb_sectors;
int64_t end;
struct iovec iov;
s->sector_num = hbitmap_iter_next(&s->hbi);
if (s->sector_num < 0) {
bdrv_dirty_iter_init(source, s->dirty_bitmap, &s->hbi);
s->sector_num = hbitmap_iter_next(&s->hbi);
trace_mirror_restart_iter(s,
bdrv_get_dirty_count(source, s->dirty_bitmap));
assert(s->sector_num >= 0);
}
hbitmap_next_sector = s->sector_num;
sector_num = s->sector_num;
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
end = s->common.len >> BDRV_SECTOR_BITS;
/* Extend the QEMUIOVector to include all adjacent blocks that will
* be copied in this operation.
*
* We have to do this if we have no backing file yet in the destination,
* and the cluster size is very large. Then we need to do COW ourselves.
* The first time a cluster is copied, copy it entirely. Note that,
* because both the granularity and the cluster size are powers of two,
* the number of sectors to copy cannot exceed one cluster.
*
* We also want to extend the QEMUIOVector to include more adjacent
* dirty blocks if possible, to limit the number of I/O operations and
* run efficiently even with a small granularity.
*/
nb_chunks = 0;
nb_sectors = 0;
next_sector = sector_num;
next_chunk = sector_num / sectors_per_chunk;
/* Wait for I/O to this cluster (from a previous iteration) to be done. */
while (test_bit(next_chunk, s->in_flight_bitmap)) {
trace_mirror_yield_in_flight(s, sector_num, s->in_flight);
qemu_coroutine_yield();
}
do {
int added_sectors, added_chunks;
if (!bdrv_get_dirty(source, s->dirty_bitmap, next_sector) ||
test_bit(next_chunk, s->in_flight_bitmap)) {
assert(nb_sectors > 0);
break;
}
added_sectors = sectors_per_chunk;
if (s->cow_bitmap && !test_bit(next_chunk, s->cow_bitmap)) {
bdrv_round_to_clusters(s->target,
next_sector, added_sectors,
&next_sector, &added_sectors);
/* On the first iteration, the rounding may make us copy
* sectors before the first dirty one.
*/
if (next_sector < sector_num) {
assert(nb_sectors == 0);
sector_num = next_sector;
next_chunk = next_sector / sectors_per_chunk;
}
}
added_sectors = MIN(added_sectors, end - (sector_num + nb_sectors));
added_chunks = (added_sectors + sectors_per_chunk - 1) / sectors_per_chunk;
/* When doing COW, it may happen that there is not enough space for
* a full cluster. Wait if that is the case.
*/
while (nb_chunks == 0 && s->buf_free_count < added_chunks) {
trace_mirror_yield_buf_busy(s, nb_chunks, s->in_flight);
qemu_coroutine_yield();
}
if (s->buf_free_count < nb_chunks + added_chunks) {
trace_mirror_break_buf_busy(s, nb_chunks, s->in_flight);
break;
}
/* We have enough free space to copy these sectors. */
bitmap_set(s->in_flight_bitmap, next_chunk, added_chunks);
nb_sectors += added_sectors;
nb_chunks += added_chunks;
next_sector += added_sectors;
next_chunk += added_chunks;
if (!s->synced && s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, added_sectors);
} else {
delay_ns = 0;
}
} while (delay_ns == 0 && next_sector < end);
/* Allocate a MirrorOp that is used as an AIO callback. */
op = g_slice_new(MirrorOp);
op->s = s;
op->sector_num = sector_num;
op->nb_sectors = nb_sectors;
/* Now make a QEMUIOVector taking enough granularity-sized chunks
* from s->buf_free.
*/
qemu_iovec_init(&op->qiov, nb_chunks);
next_sector = sector_num;
while (nb_chunks-- > 0) {
MirrorBuffer *buf = QSIMPLEQ_FIRST(&s->buf_free);
QSIMPLEQ_REMOVE_HEAD(&s->buf_free, next);
s->buf_free_count--;
qemu_iovec_add(&op->qiov, buf, s->granularity);
/* Advance the HBitmapIter in parallel, so that we do not examine
* the same sector twice.
*/
if (next_sector > hbitmap_next_sector
&& bdrv_get_dirty(source, s->dirty_bitmap, next_sector)) {
hbitmap_next_sector = hbitmap_iter_next(&s->hbi);
}
next_sector += sectors_per_chunk;
}
bdrv_reset_dirty(source, sector_num, nb_sectors);
s->sector_num = bdrv_get_next_dirty(source, s->sector_num);
nb_sectors = MIN(BDRV_SECTORS_PER_DIRTY_CHUNK, end - s->sector_num);
bdrv_reset_dirty(source, s->sector_num, nb_sectors);
/* Copy the dirty cluster. */
s->in_flight++;
trace_mirror_one_iteration(s, sector_num, nb_sectors);
bdrv_aio_readv(source, sector_num, &op->qiov, nb_sectors,
mirror_read_complete, op);
return delay_ns;
}
iov.iov_base = s->buf;
iov.iov_len = nb_sectors * 512;
qemu_iovec_init_external(&qiov, &iov, 1);
static void mirror_free_init(MirrorBlockJob *s)
{
int granularity = s->granularity;
size_t buf_size = s->buf_size;
uint8_t *buf = s->buf;
assert(s->buf_free_count == 0);
QSIMPLEQ_INIT(&s->buf_free);
while (buf_size != 0) {
MirrorBuffer *cur = (MirrorBuffer *)buf;
QSIMPLEQ_INSERT_TAIL(&s->buf_free, cur, next);
s->buf_free_count++;
buf_size -= granularity;
buf += granularity;
trace_mirror_one_iteration(s, s->sector_num, nb_sectors);
ret = bdrv_co_readv(source, s->sector_num, nb_sectors, &qiov);
if (ret < 0) {
*p_action = mirror_error_action(s, true, -ret);
goto fail;
}
}
static void mirror_drain(MirrorBlockJob *s)
{
while (s->in_flight > 0) {
qemu_coroutine_yield();
ret = bdrv_co_writev(target, s->sector_num, nb_sectors, &qiov);
if (ret < 0) {
*p_action = mirror_error_action(s, false, -ret);
s->synced = false;
goto fail;
}
return 0;
fail:
/* Try again later. */
bdrv_set_dirty(source, s->sector_num, nb_sectors);
return ret;
}
static void coroutine_fn mirror_run(void *opaque)
{
MirrorBlockJob *s = opaque;
BlockDriverState *bs = s->common.bs;
int64_t sector_num, end, sectors_per_chunk, length;
uint64_t last_pause_ns;
BlockDriverInfo bdi;
char backing_filename[1024];
int64_t sector_num, end;
int ret = 0;
int n;
@@ -324,43 +105,22 @@ static void coroutine_fn mirror_run(void *opaque)
}
s->common.len = bdrv_getlength(bs);
if (s->common.len <= 0) {
ret = s->common.len;
goto immediate_exit;
}
length = DIV_ROUND_UP(s->common.len, s->granularity);
s->in_flight_bitmap = bitmap_new(length);
/* If we have no backing file yet in the destination, we cannot let
* the destination do COW. Instead, we copy sectors around the
* dirty data if needed. We need a bitmap to do that.
*/
bdrv_get_backing_filename(s->target, backing_filename,
sizeof(backing_filename));
if (backing_filename[0] && !s->target->backing_hd) {
ret = bdrv_get_info(s->target, &bdi);
if (ret < 0) {
goto immediate_exit;
}
if (s->granularity < bdi.cluster_size) {
s->buf_size = MAX(s->buf_size, bdi.cluster_size);
s->cow_bitmap = bitmap_new(length);
}
if (s->common.len < 0) {
block_job_completed(&s->common, s->common.len);
return;
}
end = s->common.len >> BDRV_SECTOR_BITS;
s->buf = qemu_blockalign(bs, s->buf_size);
sectors_per_chunk = s->granularity >> BDRV_SECTOR_BITS;
mirror_free_init(s);
s->buf = qemu_blockalign(bs, BLOCK_SIZE);
if (!s->is_none_mode) {
if (s->mode != MIRROR_SYNC_MODE_NONE) {
/* First part, loop on the sectors and initialize the dirty bitmap. */
BlockDriverState *base = s->base;
BlockDriverState *base;
base = s->mode == MIRROR_SYNC_MODE_FULL ? NULL : bs->backing_hd;
for (sector_num = 0; sector_num < end; ) {
int64_t next = (sector_num | (sectors_per_chunk - 1)) + 1;
ret = bdrv_is_allocated_above(bs, base,
sector_num, next - sector_num, &n);
int64_t next = (sector_num | (BDRV_SECTORS_PER_DIRTY_CHUNK - 1)) + 1;
ret = bdrv_co_is_allocated_above(bs, base,
sector_num, next - sector_num, &n);
if (ret < 0) {
goto immediate_exit;
@@ -376,42 +136,24 @@ static void coroutine_fn mirror_run(void *opaque)
}
}
bdrv_dirty_iter_init(bs, s->dirty_bitmap, &s->hbi);
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
s->sector_num = -1;
for (;;) {
uint64_t delay_ns = 0;
uint64_t delay_ns;
int64_t cnt;
bool should_complete;
if (s->ret < 0) {
ret = s->ret;
goto immediate_exit;
}
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
/* Note that even when no rate limit is applied we need to yield
* periodically with no pending I/O so that qemu_aio_flush() returns.
* We do so every SLICE_TIME nanoseconds, or when there is an error,
* or when the source is clean, whichever comes first.
*/
if (qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - last_pause_ns < SLICE_TIME &&
s->common.iostatus == BLOCK_DEVICE_IO_STATUS_OK) {
if (s->in_flight == MAX_IN_FLIGHT || s->buf_free_count == 0 ||
(cnt == 0 && s->in_flight > 0)) {
trace_mirror_yield(s, s->in_flight, s->buf_free_count, cnt);
qemu_coroutine_yield();
continue;
} else if (cnt != 0) {
delay_ns = mirror_iteration(s);
if (delay_ns == 0) {
continue;
}
cnt = bdrv_get_dirty_count(bs);
if (cnt != 0) {
BlockErrorAction action = BDRV_ACTION_REPORT;
ret = mirror_iteration(s, &action);
if (ret < 0 && action == BDRV_ACTION_REPORT) {
goto immediate_exit;
}
cnt = bdrv_get_dirty_count(bs);
}
should_complete = false;
if (s->in_flight == 0 && cnt == 0) {
if (cnt == 0) {
trace_mirror_before_flush(s);
ret = bdrv_flush(s->target);
if (ret < 0) {
@@ -432,7 +174,7 @@ static void coroutine_fn mirror_run(void *opaque)
should_complete = s->should_complete ||
block_job_is_cancelled(&s->common);
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
}
}
@@ -447,21 +189,31 @@ static void coroutine_fn mirror_run(void *opaque)
*/
trace_mirror_before_drain(s, cnt);
bdrv_drain_all();
cnt = bdrv_get_dirty_count(bs, s->dirty_bitmap);
cnt = bdrv_get_dirty_count(bs);
}
ret = 0;
trace_mirror_before_sleep(s, cnt, s->synced, delay_ns);
trace_mirror_before_sleep(s, cnt, s->synced);
if (!s->synced) {
/* Publish progress */
s->common.offset = (end - cnt) * BDRV_SECTOR_SIZE;
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
s->common.offset = end * BDRV_SECTOR_SIZE - cnt * BLOCK_SIZE;
if (s->common.speed) {
delay_ns = ratelimit_calculate_delay(&s->limit, BDRV_SECTORS_PER_DIRTY_CHUNK);
} else {
delay_ns = 0;
}
/* Note that even when no rate limit is applied we need to yield
* with no pending I/O here so that qemu_aio_flush() returns.
*/
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
if (block_job_is_cancelled(&s->common)) {
break;
}
} else if (!should_complete) {
delay_ns = (s->in_flight == 0 && cnt == 0 ? SLICE_TIME : 0);
block_job_sleep_ns(&s->common, QEMU_CLOCK_REALTIME, delay_ns);
delay_ns = (cnt == 0 ? SLICE_TIME : 0);
block_job_sleep_ns(&s->common, rt_clock, delay_ns);
} else if (cnt == 0) {
/* The two disks are in sync. Exit and report successful
* completion.
@@ -470,39 +222,20 @@ static void coroutine_fn mirror_run(void *opaque)
s->common.cancelled = false;
break;
}
last_pause_ns = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
}
immediate_exit:
if (s->in_flight > 0) {
/* We get here only if something went wrong. Either the job failed,
* or it was cancelled prematurely so that we do not guarantee that
* the target is a copy of the source.
*/
assert(ret < 0 || (!s->synced && block_job_is_cancelled(&s->common)));
mirror_drain(s);
}
assert(s->in_flight == 0);
qemu_vfree(s->buf);
g_free(s->cow_bitmap);
g_free(s->in_flight_bitmap);
bdrv_release_dirty_bitmap(bs, s->dirty_bitmap);
g_free(s->buf);
bdrv_set_dirty_tracking(bs, false);
bdrv_iostatus_disable(s->target);
if (s->should_complete && ret == 0) {
if (bdrv_get_flags(s->target) != bdrv_get_flags(s->common.bs)) {
bdrv_reopen(s->target, bdrv_get_flags(s->common.bs), NULL);
}
bdrv_swap(s->target, s->common.bs);
if (s->common.driver->job_type == BLOCK_JOB_TYPE_COMMIT) {
/* drop the bs loop chain formed by the swap: break the loop then
* trigger the unref from the top one */
BlockDriverState *p = s->base->backing_hd;
s->base->backing_hd = NULL;
bdrv_unref(p);
}
}
bdrv_unref(s->target);
bdrv_close(s->target);
bdrv_delete(s->target);
block_job_completed(&s->common, ret);
}
@@ -527,12 +260,14 @@ static void mirror_iostatus_reset(BlockJob *job)
static void mirror_complete(BlockJob *job, Error **errp)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common);
Error *local_err = NULL;
int ret;
ret = bdrv_open_backing_file(s->target, NULL, &local_err);
ret = bdrv_open_backing_file(s->target);
if (ret < 0) {
error_propagate(errp, local_err);
char backing_filename[PATH_MAX];
bdrv_get_full_backing_filename(s->target, backing_filename,
sizeof(backing_filename));
error_set(errp, QERR_OPEN_FILE_FAILED, backing_filename);
return;
}
if (!s->synced) {
@@ -544,49 +279,23 @@ static void mirror_complete(BlockJob *job, Error **errp)
block_job_resume(job);
}
static const BlockJobDriver mirror_job_driver = {
static BlockJobType mirror_job_type = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_MIRROR,
.job_type = "mirror",
.set_speed = mirror_set_speed,
.iostatus_reset= mirror_iostatus_reset,
.complete = mirror_complete,
};
static const BlockJobDriver commit_active_job_driver = {
.instance_size = sizeof(MirrorBlockJob),
.job_type = BLOCK_JOB_TYPE_COMMIT,
.set_speed = mirror_set_speed,
.iostatus_reset
= mirror_iostatus_reset,
.complete = mirror_complete,
};
static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity,
int64_t buf_size,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp,
const BlockJobDriver *driver,
bool is_none_mode, BlockDriverState *base)
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, MirrorSyncMode mode,
BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
MirrorBlockJob *s;
if (granularity == 0) {
/* Choose the default granularity based on the target file's cluster
* size, clamped between 4k and 64k. */
BlockDriverInfo bdi;
if (bdrv_get_info(target, &bdi) >= 0 && bdi.cluster_size != 0) {
granularity = MAX(4096, bdi.cluster_size);
granularity = MIN(65536, granularity);
} else {
granularity = 65536;
}
}
assert ((granularity & (granularity - 1)) == 0);
if ((on_source_error == BLOCKDEV_ON_ERROR_STOP ||
on_source_error == BLOCKDEV_ON_ERROR_ENOSPC) &&
!bdrv_iostatus_is_enabled(bs)) {
@@ -594,8 +303,7 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
return;
}
s = block_job_create(driver, bs, speed, cb, opaque, errp);
s = block_job_create(&mirror_job_type, bs, speed, cb, opaque, errp);
if (!s) {
return;
}
@@ -603,15 +311,8 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
s->on_source_error = on_source_error;
s->on_target_error = on_target_error;
s->target = target;
s->is_none_mode = is_none_mode;
s->base = base;
s->granularity = granularity;
s->buf_size = MAX(buf_size, granularity);
s->dirty_bitmap = bdrv_create_dirty_bitmap(bs, granularity, errp);
if (!s->dirty_bitmap) {
return;
}
s->mode = mode;
bdrv_set_dirty_tracking(bs, true);
bdrv_set_enable_write_cache(s->target, true);
bdrv_set_on_error(s->target, on_target_error, on_target_error);
bdrv_iostatus_enable(s->target);
@@ -619,80 +320,3 @@ static void mirror_start_job(BlockDriverState *bs, BlockDriverState *target,
trace_mirror_start(bs, s, s->common.co, opaque);
qemu_coroutine_enter(s->common.co, s);
}
void mirror_start(BlockDriverState *bs, BlockDriverState *target,
int64_t speed, int64_t granularity, int64_t buf_size,
MirrorSyncMode mode, BlockdevOnError on_source_error,
BlockdevOnError on_target_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
bool is_none_mode;
BlockDriverState *base;
is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
base = mode == MIRROR_SYNC_MODE_TOP ? bs->backing_hd : NULL;
mirror_start_job(bs, target, speed, granularity, buf_size,
on_source_error, on_target_error, cb, opaque, errp,
&mirror_job_driver, is_none_mode, base);
}
void commit_active_start(BlockDriverState *bs, BlockDriverState *base,
int64_t speed,
BlockdevOnError on_error,
BlockDriverCompletionFunc *cb,
void *opaque, Error **errp)
{
int64_t length, base_length;
int orig_base_flags;
int ret;
Error *local_err = NULL;
orig_base_flags = bdrv_get_flags(base);
if (bdrv_reopen(base, bs->open_flags, errp)) {
return;
}
length = bdrv_getlength(bs);
if (length < 0) {
error_setg_errno(errp, -length,
"Unable to determine length of %s", bs->filename);
goto error_restore_flags;
}
base_length = bdrv_getlength(base);
if (base_length < 0) {
error_setg_errno(errp, -base_length,
"Unable to determine length of %s", base->filename);
goto error_restore_flags;
}
if (length > base_length) {
ret = bdrv_truncate(base, length);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Top image %s is larger than base image %s, and "
"resize of base image failed",
bs->filename, base->filename);
goto error_restore_flags;
}
}
bdrv_ref(base);
mirror_start_job(bs, base, speed, 0, 0,
on_error, on_error, cb, opaque, &local_err,
&commit_active_job_driver, false, base);
if (local_err) {
error_propagate(errp, local_err);
goto error_restore_flags;
}
return;
error_restore_flags:
/* ignore error and errp for bdrv_reopen, because we want to propagate
* the original error */
bdrv_reopen(base, orig_base_flags, NULL);
return;
}
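
mirror_free_init() above carves the single preallocated buffer into granularity-sized chunks and threads the free list through the chunks themselves (MirrorBuffer), so no separate bookkeeping allocation is needed. A small self-contained sketch of that carving, in plain C with a simple next pointer in place of QSIMPLEQ (all names and sizes are illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Header stored at the start of every free chunk, as with MirrorBuffer. */
struct chunk {
    struct chunk *next;
};

int main(void)
{
    size_t granularity = 4096;
    size_t buf_size = 4 * granularity;
    unsigned char *buf = malloc(buf_size);
    struct chunk *free_list = NULL;
    int free_count = 0;
    unsigned char *p;

    /* Carve the buffer into granularity-sized chunks; the free list lives
     * inside the chunks, so only the one big buffer is ever allocated. */
    for (p = buf; buf_size != 0; buf_size -= granularity, p += granularity) {
        struct chunk *c = (struct chunk *)p;
        c->next = free_list;
        free_list = c;
        free_count++;
    }
    printf("%d free chunks\n", free_count);          /* prints 4 */

    /* Taking one chunk for an I/O operation, as mirror_iteration() does: */
    if (free_list) {
        struct chunk *c = free_list;
        free_list = c->next;
        free_count--;
        (void)c;                                     /* would become an iovec base */
    }

    free(buf);
    return 0;
}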


@@ -1,388 +0,0 @@
/*
* QEMU Block driver for NBD
*
* Copyright (C) 2008 Bull S.A.S.
* Author: Laurent Vivier <Laurent.Vivier@bull.net>
*
* Some parts:
* Copyright (C) 2007 Anthony Liguori <anthony@codemonkey.ws>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "nbd-client.h"
#include "qemu/sockets.h"
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
static void nbd_recv_coroutines_enter_all(NbdClientSession *s)
{
int i;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static void nbd_teardown_connection(NbdClientSession *client)
{
/* finish any pending coroutines */
shutdown(client->sock, 2);
nbd_recv_coroutines_enter_all(client);
qemu_aio_set_fd_handler(client->sock, NULL, NULL, NULL);
closesocket(client->sock);
client->sock = -1;
}
static void nbd_reply_ready(void *opaque)
{
NbdClientSession *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
fail:
nbd_teardown_connection(s);
}
static void nbd_restart_write(void *opaque)
{
NbdClientSession *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(NbdClientSession *s,
struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write, s);
if (qiov) {
if (!s->is_unix) {
socket_set_cork(s->sock, 1);
}
rc = nbd_send_request(s->sock, request);
if (rc >= 0) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
rc = -EIO;
}
}
if (!s->is_unix) {
socket_set_cork(s->sock, 0);
}
} else {
rc = nbd_send_request(s->sock, request);
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(NbdClientSession *s,
struct nbd_request *request, struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_start(NbdClientSession *s,
struct nbd_request *request)
{
int i;
/* Poor man's semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static void nbd_coroutine_end(NbdClientSession *s,
struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_co_readv_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_READ };
struct nbd_reply reply;
ssize_t ret;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, qiov, offset);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
static int nbd_co_writev_1(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
struct nbd_request request = { .type = NBD_CMD_WRITE };
struct nbd_reply reply;
ssize_t ret;
if (!bdrv_enable_write_cache(client->bs) &&
(client->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
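In other words, 2040 sectors × 512 bytes = 1,044,480 bytes, which is 255 × 4096: each sub-request stays 4 KiB-aligned while remaining just under the 1 MiB per-request limit.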
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(client, sector_num,
NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(client, sector_num, nb_sectors, qiov, offset);
}
int nbd_client_session_co_flush(NbdClientSession *client)
{
struct nbd_request request = { .type = NBD_CMD_FLUSH };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
if (client->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors)
{
struct nbd_request request = { .type = NBD_CMD_TRIM };
struct nbd_reply reply;
ssize_t ret;
if (!(client->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(client, &request);
ret = nbd_co_send_request(client, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(client, &request, &reply, NULL, 0);
}
nbd_coroutine_end(client, &request);
return -reply.error;
}
void nbd_client_session_close(NbdClientSession *client)
{
struct nbd_request request = {
.type = NBD_CMD_DISC,
.from = 0,
.len = 0
};
if (!client->bs) {
return;
}
if (client->sock == -1) {
return;
}
nbd_send_request(client->sock, &request);
nbd_teardown_connection(client);
client->bs = NULL;
}
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export)
{
int ret;
/* NBD handshake */
logout("session init %s\n", export);
qemu_set_block(sock);
ret = nbd_receive_negotiate(sock, export,
&client->nbdflags, &client->size,
&client->blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
qemu_co_mutex_init(&client->send_mutex);
qemu_co_mutex_init(&client->free_sema);
client->bs = bs;
client->sock = sock;
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
qemu_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL, client);
logout("Established connection with NBD server\n");
return 0;
}
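
HANDLE_TO_INDEX()/INDEX_TO_HANDLE() above map a small request-slot index to a per-client NBD handle by XOR-ing it with a value derived from the BlockDriverState pointer; applying the same XOR recovers the index. A minimal sketch of the round trip (plain C; the key value is illustrative):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_REQUESTS 16
#define HANDLE_TO_INDEX(key, handle) ((handle) ^ (key))
#define INDEX_TO_HANDLE(key, index)  ((index) ^ (key))

int main(void)
{
    /* Stands in for (uint64_t)(intptr_t)bs in the macros above. */
    uint64_t key = 0x7f0012345678ULL;
    uint64_t i;

    for (i = 0; i < MAX_REQUESTS; i++) {
        uint64_t handle = INDEX_TO_HANDLE(key, i);
        /* XOR is its own inverse, so the slot index is recovered exactly. */
        assert(HANDLE_TO_INDEX(key, handle) == i);
    }
    printf("all %d handles round-trip\n", MAX_REQUESTS);
    return 0;
}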


@@ -1,50 +0,0 @@
#ifndef NBD_CLIENT_H
#define NBD_CLIENT_H
#include "qemu-common.h"
#include "block/nbd.h"
#include "block/block_int.h"
/* #define DEBUG_NBD */
#if defined(DEBUG_NBD)
#define logout(fmt, ...) \
fprintf(stderr, "nbd\t%-24s" fmt, __func__, ##__VA_ARGS__)
#else
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
typedef struct NbdClientSession {
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
bool is_unix;
BlockDriverState *bs;
} NbdClientSession;
int nbd_client_session_init(NbdClientSession *client, BlockDriverState *bs,
int sock, const char *export_name);
void nbd_client_session_close(NbdClientSession *client);
int nbd_client_session_co_discard(NbdClientSession *client, int64_t sector_num,
int nb_sectors);
int nbd_client_session_co_flush(NbdClientSession *client);
int nbd_client_session_co_writev(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
int nbd_client_session_co_readv(NbdClientSession *client, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov);
#endif /* NBD_CLIENT_H */


@@ -26,31 +26,56 @@
* THE SOFTWARE.
*/
#include "block/nbd-client.h"
#include "qemu/uri.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "qemu/sockets.h"
#include "qapi/qmp/qjson.h"
#include "qapi/qmp/qint.h"
#include "qemu-common.h"
#include "nbd.h"
#include "uri.h"
#include "block_int.h"
#include "module.h"
#include "qemu_socket.h"
#include <sys/types.h>
#include <unistd.h>
#define EN_OPTSTR ":exportname="
/* #define DEBUG_NBD */
#if defined(DEBUG_NBD)
#define logout(fmt, ...) \
fprintf(stderr, "nbd\t%-24s" fmt, __func__, ##__VA_ARGS__)
#else
#define logout(fmt, ...) ((void)0)
#endif
#define MAX_NBD_REQUESTS 16
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
typedef struct BDRVNBDState {
NbdClientSession client;
QemuOpts *socket_opts;
int sock;
uint32_t nbdflags;
off_t size;
size_t blocksize;
CoMutex send_mutex;
CoMutex free_sema;
Coroutine *send_coroutine;
int in_flight;
Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
struct nbd_reply reply;
int is_unix;
char *host_spec;
char *export_name; /* An NBD server may export several devices */
} BDRVNBDState;
static int nbd_parse_uri(const char *filename, QDict *options)
static int nbd_parse_uri(BDRVNBDState *s, const char *filename)
{
URI *uri;
const char *p;
QueryParams *qp = NULL;
int ret = 0;
bool is_unix;
uri = uri_parse(filename);
if (!uri) {
@@ -59,11 +84,11 @@ static int nbd_parse_uri(const char *filename, QDict *options)
/* transport */
if (!strcmp(uri->scheme, "nbd")) {
is_unix = false;
s->is_unix = false;
} else if (!strcmp(uri->scheme, "nbd+tcp")) {
is_unix = false;
s->is_unix = false;
} else if (!strcmp(uri->scheme, "nbd+unix")) {
is_unix = true;
s->is_unix = true;
} else {
ret = -EINVAL;
goto out;
@@ -72,44 +97,32 @@ static int nbd_parse_uri(const char *filename, QDict *options)
p = uri->path ? uri->path : "/";
p += strspn(p, "/");
if (p[0]) {
qdict_put(options, "export", qstring_from_str(p));
s->export_name = g_strdup(p);
}
qp = query_params_parse(uri->query);
if (qp->n > 1 || (is_unix && !qp->n) || (!is_unix && qp->n)) {
if (qp->n > 1 || (s->is_unix && !qp->n) || (!s->is_unix && qp->n)) {
ret = -EINVAL;
goto out;
}
if (is_unix) {
if (s->is_unix) {
/* nbd+unix:///export?socket=path */
if (uri->server || uri->port || strcmp(qp->p[0].name, "socket")) {
ret = -EINVAL;
goto out;
}
qdict_put(options, "path", qstring_from_str(qp->p[0].value));
s->host_spec = g_strdup(qp->p[0].value);
} else {
QString *host;
/* nbd[+tcp]://host[:port]/export */
/* nbd[+tcp]://host:port/export */
if (!uri->server) {
ret = -EINVAL;
goto out;
}
/* strip braces from literal IPv6 address */
if (uri->server[0] == '[') {
host = qstring_from_substr(uri->server, 1,
strlen(uri->server) - 2);
} else {
host = qstring_from_str(uri->server);
}
qdict_put(options, "host", host);
if (uri->port) {
char* port_str = g_strdup_printf("%d", uri->port);
qdict_put(options, "port", qstring_from_str(port_str));
g_free(port_str);
if (!uri->port) {
uri->port = NBD_DEFAULT_PORT;
}
s->host_spec = g_strdup_printf("%s:%d", uri->server, uri->port);
}
out:
@@ -120,29 +133,16 @@ out:
return ret;
}
static void nbd_parse_filename(const char *filename, QDict *options,
Error **errp)
static int nbd_config(BDRVNBDState *s, const char *filename)
{
char *file;
char *export_name;
const char *host_spec;
const char *unixpath;
if (qdict_haskey(options, "host")
|| qdict_haskey(options, "port")
|| qdict_haskey(options, "path"))
{
error_setg(errp, "host/port/path and a file name may not be specified "
"at the same time");
return;
}
int err = -EINVAL;
if (strstr(filename, "://")) {
int ret = nbd_parse_uri(filename, options);
if (ret < 0) {
error_setg(errp, "No valid URL specified");
}
return;
return nbd_parse_uri(s, filename);
}
file = g_strdup(filename);
@@ -154,86 +154,183 @@ static void nbd_parse_filename(const char *filename, QDict *options,
}
export_name[0] = 0; /* truncate 'file' */
export_name += strlen(EN_OPTSTR);
qdict_put(options, "export", qstring_from_str(export_name));
s->export_name = g_strdup(export_name);
}
/* extract the host_spec - fail if it's not nbd:... */
if (!strstart(file, "nbd:", &host_spec)) {
error_setg(errp, "File name string for NBD must start with 'nbd:'");
goto out;
}
if (!*host_spec) {
goto out;
}
/* are we a UNIX or TCP socket? */
if (strstart(host_spec, "unix:", &unixpath)) {
qdict_put(options, "path", qstring_from_str(unixpath));
s->is_unix = true;
s->host_spec = g_strdup(unixpath);
} else {
InetSocketAddress *addr = NULL;
addr = inet_parse(host_spec, errp);
if (!addr) {
goto out;
}
qdict_put(options, "host", qstring_from_str(addr->host));
qdict_put(options, "port", qstring_from_str(addr->port));
qapi_free_InetSocketAddress(addr);
s->is_unix = false;
s->host_spec = g_strdup(host_spec);
}
err = 0;
out:
g_free(file);
if (err != 0) {
g_free(s->export_name);
g_free(s->host_spec);
}
return err;
}
static void nbd_config(BDRVNBDState *s, QDict *options, char **export,
Error **errp)
static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
{
Error *local_err = NULL;
int i;
if (qdict_haskey(options, "path") == qdict_haskey(options, "host")) {
if (qdict_haskey(options, "path")) {
error_setg(errp, "path and host may not be used at the same time.");
} else {
error_setg(errp, "one of path and host must be specified.");
/* Poor man's semaphore. The free_sema is locked when no other request
* can be accepted, and unlocked after receiving one reply. */
if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
qemu_co_mutex_lock(&s->free_sema);
assert(s->in_flight < MAX_NBD_REQUESTS);
}
s->in_flight++;
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i] == NULL) {
s->recv_coroutine[i] = qemu_coroutine_self();
break;
}
}
assert(i < MAX_NBD_REQUESTS);
request->handle = INDEX_TO_HANDLE(s, i);
}
static int nbd_have_request(void *opaque)
{
BDRVNBDState *s = opaque;
return s->in_flight > 0;
}
static void nbd_reply_ready(void *opaque)
{
BDRVNBDState *s = opaque;
uint64_t i;
int ret;
if (s->reply.handle == 0) {
/* No reply already in flight. Fetch a header. It is possible
* that another thread has done the same thing in parallel, so
* the socket is not readable anymore.
*/
ret = nbd_receive_reply(s->sock, &s->reply);
if (ret == -EAGAIN) {
return;
}
if (ret < 0) {
s->reply.handle = 0;
goto fail;
}
}
/* There's no need for a mutex on the receive side, because the
* handler acts as a synchronization point and ensures that only
* one coroutine is called until the reply finishes. */
i = HANDLE_TO_INDEX(s, s->reply.handle);
if (i >= MAX_NBD_REQUESTS) {
goto fail;
}
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
return;
}
s->client.is_unix = qdict_haskey(options, "path");
s->socket_opts = qemu_opts_create(&socket_optslist, NULL, 0,
&error_abort);
qemu_opts_absorb_qdict(s->socket_opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return;
}
if (!qemu_opt_get(s->socket_opts, "port")) {
qemu_opt_set_number(s->socket_opts, "port", NBD_DEFAULT_PORT);
}
*export = g_strdup(qdict_get_try_str(options, "export"));
if (*export) {
qdict_del(options, "export");
fail:
for (i = 0; i < MAX_NBD_REQUESTS; i++) {
if (s->recv_coroutine[i]) {
qemu_coroutine_enter(s->recv_coroutine[i], NULL);
}
}
}
static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
static void nbd_restart_write(void *opaque)
{
BDRVNBDState *s = opaque;
qemu_coroutine_enter(s->send_coroutine, NULL);
}
static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
QEMUIOVector *qiov, int offset)
{
int rc, ret;
qemu_co_mutex_lock(&s->send_mutex);
s->send_coroutine = qemu_coroutine_self();
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
nbd_have_request, s);
rc = nbd_send_request(s->sock, request);
if (rc >= 0 && qiov) {
ret = qemu_co_sendv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
return -EIO;
}
}
qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->send_coroutine = NULL;
qemu_co_mutex_unlock(&s->send_mutex);
return rc;
}
static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
struct nbd_reply *reply,
QEMUIOVector *qiov, int offset)
{
int ret;
/* Wait until we're woken up by the read handler. TODO: perhaps
* peek at the next reply and avoid yielding if it's ours? */
qemu_coroutine_yield();
*reply = s->reply;
if (reply->handle != request->handle) {
reply->error = EIO;
} else {
if (qiov && reply->error == 0) {
ret = qemu_co_recvv(s->sock, qiov->iov, qiov->niov,
offset, request->len);
if (ret != request->len) {
reply->error = EIO;
}
}
/* Tell the read handler to read another header. */
s->reply.handle = 0;
}
}
static void nbd_coroutine_end(BDRVNBDState *s, struct nbd_request *request)
{
int i = HANDLE_TO_INDEX(s, request->handle);
s->recv_coroutine[i] = NULL;
if (s->in_flight-- == MAX_NBD_REQUESTS) {
qemu_co_mutex_unlock(&s->free_sema);
}
}
static int nbd_establish_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
int sock;
int ret;
off_t size;
size_t blocksize;
if (s->client.is_unix) {
sock = unix_connect_opts(s->socket_opts, errp, NULL, NULL);
if (s->is_unix) {
sock = unix_socket_outgoing(s->host_spec);
} else {
sock = inet_connect_opts(s->socket_opts, errp, NULL, NULL);
if (sock >= 0) {
socket_set_nodelay(sock);
}
sock = tcp_socket_outgoing_spec(s->host_spec);
}
/* Failed to establish connection */
@@ -242,92 +339,232 @@ static int nbd_establish_connection(BlockDriverState *bs, Error **errp)
return -errno;
}
return sock;
/* NBD handshake */
ret = nbd_receive_negotiate(sock, s->export_name, &s->nbdflags, &size,
&blocksize);
if (ret < 0) {
logout("Failed to negotiate with the NBD server\n");
closesocket(sock);
return ret;
}
/* Now that we're connected, set the socket to be non-blocking and
* kick the reply mechanism. */
socket_set_nonblock(sock);
qemu_aio_set_fd_handler(sock, nbd_reply_ready, NULL,
nbd_have_request, s);
s->sock = sock;
s->size = size;
s->blocksize = blocksize;
logout("Established connection with NBD server\n");
return 0;
}
static int nbd_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static void nbd_teardown_connection(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
char *export = NULL;
int result, sock;
Error *local_err = NULL;
struct nbd_request request;
request.type = NBD_CMD_DISC;
request.from = 0;
request.len = 0;
nbd_send_request(s->sock, &request);
qemu_aio_set_fd_handler(s->sock, NULL, NULL, NULL, NULL);
closesocket(s->sock);
}
static int nbd_open(BlockDriverState *bs, const char* filename, int flags)
{
BDRVNBDState *s = bs->opaque;
int result;
qemu_co_mutex_init(&s->send_mutex);
qemu_co_mutex_init(&s->free_sema);
/* Pop the config into our state object. Exit if invalid. */
nbd_config(s, options, &export, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return -EINVAL;
result = nbd_config(s, filename);
if (result != 0) {
return result;
}
/* establish TCP connection, return error if it fails
* TODO: Configurable retry-until-timeout behaviour.
*/
sock = nbd_establish_connection(bs, errp);
if (sock < 0) {
return sock;
}
result = nbd_establish_connection(bs);
/* NBD handshake */
result = nbd_client_session_init(&s->client, bs, sock, export);
g_free(export);
return result;
}
static int nbd_co_readv_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_READ;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, qiov, offset);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_writev_1(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov,
int offset)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
request.type = NBD_CMD_WRITE;
if (!bdrv_enable_write_cache(bs) && (s->nbdflags & NBD_FLAG_SEND_FUA)) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, qiov, offset);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
/* qemu-nbd has a limit of slightly less than 1M per request. Try to
* remain aligned to 4K. */
#define NBD_MAX_SECTORS 2040
static int nbd_co_readv(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_readv(&s->client, sector_num,
nb_sectors, qiov);
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_readv_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_readv_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_writev(BlockDriverState *bs, int64_t sector_num,
int nb_sectors, QEMUIOVector *qiov)
{
BDRVNBDState *s = bs->opaque;
return nbd_client_session_co_writev(&s->client, sector_num,
nb_sectors, qiov);
int offset = 0;
int ret;
while (nb_sectors > NBD_MAX_SECTORS) {
ret = nbd_co_writev_1(bs, sector_num, NBD_MAX_SECTORS, qiov, offset);
if (ret < 0) {
return ret;
}
offset += NBD_MAX_SECTORS * 512;
sector_num += NBD_MAX_SECTORS;
nb_sectors -= NBD_MAX_SECTORS;
}
return nbd_co_writev_1(bs, sector_num, nb_sectors, qiov, offset);
}
static int nbd_co_flush(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
return nbd_client_session_co_flush(&s->client);
if (!(s->nbdflags & NBD_FLAG_SEND_FLUSH)) {
return 0;
}
request.type = NBD_CMD_FLUSH;
if (s->nbdflags & NBD_FLAG_SEND_FUA) {
request.type |= NBD_CMD_FLAG_FUA;
}
request.from = 0;
request.len = 0;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static int nbd_co_discard(BlockDriverState *bs, int64_t sector_num,
int nb_sectors)
{
BDRVNBDState *s = bs->opaque;
struct nbd_request request;
struct nbd_reply reply;
ssize_t ret;
return nbd_client_session_co_discard(&s->client, sector_num,
nb_sectors);
if (!(s->nbdflags & NBD_FLAG_SEND_TRIM)) {
return 0;
}
request.type = NBD_CMD_TRIM;
request.from = sector_num * 512;
request.len = nb_sectors * 512;
nbd_coroutine_start(s, &request);
ret = nbd_co_send_request(s, &request, NULL, 0);
if (ret < 0) {
reply.error = -ret;
} else {
nbd_co_receive_reply(s, &request, &reply, NULL, 0);
}
nbd_coroutine_end(s, &request);
return -reply.error;
}
static void nbd_close(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
g_free(s->export_name);
g_free(s->host_spec);
qemu_opts_del(s->socket_opts);
nbd_client_session_close(&s->client);
nbd_teardown_connection(bs);
}
static int64_t nbd_getlength(BlockDriverState *bs)
{
BDRVNBDState *s = bs->opaque;
return s->client.size;
return s->size;
}
static BlockDriver bdrv_nbd = {
.format_name = "nbd",
.protocol_name = "nbd",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
@@ -341,7 +578,6 @@ static BlockDriver bdrv_nbd_tcp = {
.format_name = "nbd",
.protocol_name = "nbd+tcp",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,
@@ -355,7 +591,6 @@ static BlockDriver bdrv_nbd_unix = {
.format_name = "nbd",
.protocol_name = "nbd+unix",
.instance_size = sizeof(BDRVNBDState),
.bdrv_parse_filename = nbd_parse_filename,
.bdrv_file_open = nbd_open,
.bdrv_co_readv = nbd_co_readv,
.bdrv_co_writev = nbd_co_writev,


@@ -1,446 +0,0 @@
/*
* QEMU Block driver for native access to files on NFS shares
*
* Copyright (c) 2014 Peter Lieven <pl@kamp.de>
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "config-host.h"
#include <poll.h>
#include "qemu-common.h"
#include "qemu/config-file.h"
#include "qemu/error-report.h"
#include "block/block_int.h"
#include "trace.h"
#include "qemu/iov.h"
#include "qemu/uri.h"
#include "sysemu/sysemu.h"
#include <nfsc/libnfs.h>
typedef struct NFSClient {
struct nfs_context *context;
struct nfsfh *fh;
int events;
bool has_zero_init;
} NFSClient;
typedef struct NFSRPC {
int ret;
int complete;
QEMUIOVector *iov;
struct stat *st;
Coroutine *co;
QEMUBH *bh;
} NFSRPC;
static void nfs_process_read(void *arg);
static void nfs_process_write(void *arg);
static void nfs_set_events(NFSClient *client)
{
int ev = nfs_which_events(client->context);
if (ev != client->events) {
qemu_aio_set_fd_handler(nfs_get_fd(client->context),
(ev & POLLIN) ? nfs_process_read : NULL,
(ev & POLLOUT) ? nfs_process_write : NULL,
client);
}
client->events = ev;
}
static void nfs_process_read(void *arg)
{
NFSClient *client = arg;
nfs_service(client->context, POLLIN);
nfs_set_events(client);
}
static void nfs_process_write(void *arg)
{
NFSClient *client = arg;
nfs_service(client->context, POLLOUT);
nfs_set_events(client);
}
static void nfs_co_init_task(NFSClient *client, NFSRPC *task)
{
*task = (NFSRPC) {
.co = qemu_coroutine_self(),
};
}
static void nfs_co_generic_bh_cb(void *opaque)
{
NFSRPC *task = opaque;
qemu_bh_delete(task->bh);
qemu_coroutine_enter(task->co, NULL);
}
static void
nfs_co_generic_cb(int ret, struct nfs_context *nfs, void *data,
void *private_data)
{
NFSRPC *task = private_data;
task->complete = 1;
task->ret = ret;
if (task->ret > 0 && task->iov) {
if (task->ret <= task->iov->size) {
qemu_iovec_from_buf(task->iov, 0, data, task->ret);
} else {
task->ret = -EIO;
}
}
if (task->ret == 0 && task->st) {
memcpy(task->st, data, sizeof(struct stat));
}
if (task->ret < 0) {
error_report("NFS Error: %s", nfs_get_error(nfs));
}
if (task->co) {
task->bh = qemu_bh_new(nfs_co_generic_bh_cb, task);
qemu_bh_schedule(task->bh);
}
}
static int coroutine_fn nfs_co_readv(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *iov)
{
NFSClient *client = bs->opaque;
NFSRPC task;
nfs_co_init_task(client, &task);
task.iov = iov;
if (nfs_pread_async(client->context, client->fh,
sector_num * BDRV_SECTOR_SIZE,
nb_sectors * BDRV_SECTOR_SIZE,
nfs_co_generic_cb, &task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
if (task.ret < 0) {
return task.ret;
}
/* zero pad short reads */
if (task.ret < iov->size) {
qemu_iovec_memset(iov, task.ret, 0, iov->size - task.ret);
}
return 0;
}
static int coroutine_fn nfs_co_writev(BlockDriverState *bs,
int64_t sector_num, int nb_sectors,
QEMUIOVector *iov)
{
NFSClient *client = bs->opaque;
NFSRPC task;
char *buf = NULL;
nfs_co_init_task(client, &task);
buf = g_malloc(nb_sectors * BDRV_SECTOR_SIZE);
qemu_iovec_to_buf(iov, 0, buf, nb_sectors * BDRV_SECTOR_SIZE);
if (nfs_pwrite_async(client->context, client->fh,
sector_num * BDRV_SECTOR_SIZE,
nb_sectors * BDRV_SECTOR_SIZE,
buf, nfs_co_generic_cb, &task) != 0) {
g_free(buf);
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
g_free(buf);
if (task.ret != nb_sectors * BDRV_SECTOR_SIZE) {
return task.ret < 0 ? task.ret : -EIO;
}
return 0;
}
static int coroutine_fn nfs_co_flush(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
NFSRPC task;
nfs_co_init_task(client, &task);
if (nfs_fsync_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_coroutine_yield();
}
return task.ret;
}
/* TODO Convert to fine grained options */
static QemuOptsList runtime_opts = {
.name = "nfs",
.head = QTAILQ_HEAD_INITIALIZER(runtime_opts.head),
.desc = {
{
.name = "filename",
.type = QEMU_OPT_STRING,
.help = "URL to the NFS file",
},
{ /* end of list */ }
},
};
static void nfs_client_close(NFSClient *client)
{
if (client->context) {
if (client->fh) {
nfs_close(client->context, client->fh);
}
qemu_aio_set_fd_handler(nfs_get_fd(client->context), NULL, NULL, NULL);
nfs_destroy_context(client->context);
}
memset(client, 0, sizeof(NFSClient));
}
static void nfs_file_close(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
nfs_client_close(client);
}
static int64_t nfs_client_open(NFSClient *client, const char *filename,
int flags, Error **errp)
{
int ret = -EINVAL, i;
struct stat st;
URI *uri;
QueryParams *qp = NULL;
char *file = NULL, *strp = NULL;
uri = uri_parse(filename);
if (!uri) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
if (!uri->server) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
strp = strrchr(uri->path, '/');
if (strp == NULL) {
error_setg(errp, "Invalid URL specified");
goto fail;
}
file = g_strdup(strp);
*strp = 0;
client->context = nfs_init_context();
if (client->context == NULL) {
error_setg(errp, "Failed to init NFS context");
goto fail;
}
qp = query_params_parse(uri->query);
for (i = 0; i < qp->n; i++) {
if (!qp->p[i].value) {
error_setg(errp, "Value for NFS parameter expected: %s",
qp->p[i].name);
goto fail;
}
if (!strncmp(qp->p[i].name, "uid", 3)) {
nfs_set_uid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "gid", 3)) {
nfs_set_gid(client->context, atoi(qp->p[i].value));
} else if (!strncmp(qp->p[i].name, "tcp-syncnt", 10)) {
nfs_set_tcp_syncnt(client->context, atoi(qp->p[i].value));
} else {
error_setg(errp, "Unknown NFS parameter name: %s",
qp->p[i].name);
goto fail;
}
}
ret = nfs_mount(client->context, uri->server, uri->path);
if (ret < 0) {
error_setg(errp, "Failed to mount nfs share: %s",
nfs_get_error(client->context));
goto fail;
}
if (flags & O_CREAT) {
ret = nfs_creat(client->context, file, 0600, &client->fh);
if (ret < 0) {
error_setg(errp, "Failed to create file: %s",
nfs_get_error(client->context));
goto fail;
}
} else {
ret = nfs_open(client->context, file, flags, &client->fh);
if (ret < 0) {
error_setg(errp, "Failed to open file : %s",
nfs_get_error(client->context));
goto fail;
}
}
ret = nfs_fstat(client->context, client->fh, &st);
if (ret < 0) {
error_setg(errp, "Failed to fstat file: %s",
nfs_get_error(client->context));
goto fail;
}
ret = DIV_ROUND_UP(st.st_size, BDRV_SECTOR_SIZE);
client->has_zero_init = S_ISREG(st.st_mode);
goto out;
fail:
nfs_client_close(client);
out:
if (qp) {
query_params_free(qp);
}
uri_free(uri);
g_free(file);
return ret;
}
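
Judging from the parsing above, an NFS image is addressed as nfs://<server><path-to-file>, optionally with uid=, gid= and tcp-syncnt= query parameters; an illustrative (not from the source) example would be nfs://10.0.0.1/export/disk.img?uid=1000&gid=100&tcp-syncnt=2.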
static int nfs_file_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp) {
NFSClient *client = bs->opaque;
int64_t ret;
QemuOpts *opts;
Error *local_err = NULL;
opts = qemu_opts_create(&runtime_opts, NULL, 0, &error_abort);
qemu_opts_absorb_qdict(opts, options, &local_err);
if (local_err) {
error_propagate(errp, local_err);
return -EINVAL;
}
ret = nfs_client_open(client, qemu_opt_get(opts, "filename"),
(flags & BDRV_O_RDWR) ? O_RDWR : O_RDONLY,
errp);
if (ret < 0) {
return ret;
}
bs->total_sectors = ret;
return 0;
}
static int nfs_file_create(const char *url, QEMUOptionParameter *options,
Error **errp)
{
int ret = 0;
int64_t total_size = 0;
NFSClient *client = g_malloc0(sizeof(NFSClient));
/* Read out options */
while (options && options->name) {
if (!strcmp(options->name, "size")) {
total_size = options->value.n;
}
options++;
}
ret = nfs_client_open(client, url, O_CREAT, errp);
if (ret < 0) {
goto out;
}
ret = nfs_ftruncate(client->context, client->fh, total_size);
nfs_client_close(client);
out:
g_free(client);
return ret;
}
static int nfs_has_zero_init(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
return client->has_zero_init;
}
static int64_t nfs_get_allocated_file_size(BlockDriverState *bs)
{
NFSClient *client = bs->opaque;
NFSRPC task = {0};
struct stat st;
task.st = &st;
if (nfs_fstat_async(client->context, client->fh, nfs_co_generic_cb,
&task) != 0) {
return -ENOMEM;
}
while (!task.complete) {
nfs_set_events(client);
qemu_aio_wait();
}
return (task.ret < 0 ? task.ret : st.st_blocks * st.st_blksize);
}
static int nfs_file_truncate(BlockDriverState *bs, int64_t offset)
{
NFSClient *client = bs->opaque;
return nfs_ftruncate(client->context, client->fh, offset);
}
static BlockDriver bdrv_nfs = {
.format_name = "nfs",
.protocol_name = "nfs",
.instance_size = sizeof(NFSClient),
.bdrv_needs_filename = true,
.bdrv_has_zero_init = nfs_has_zero_init,
.bdrv_get_allocated_file_size = nfs_get_allocated_file_size,
.bdrv_truncate = nfs_file_truncate,
.bdrv_file_open = nfs_file_open,
.bdrv_close = nfs_file_close,
.bdrv_create = nfs_file_create,
.bdrv_co_readv = nfs_co_readv,
.bdrv_co_writev = nfs_co_writev,
.bdrv_co_flush_to_disk = nfs_co_flush,
};
static void nfs_block_init(void)
{
bdrv_register(&bdrv_nfs);
}
block_init(nfs_block_init);
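As a side note on the driver just registered: nfs_client_open above accepts three integer query options on the nfs:// URL (uid, gid and tcp-syncnt) and sizes the image by rounding st.st_size up to 512-byte sectors. A minimal, self-contained sketch of that same dispatch and rounding, outside of QEMU (parse_nfs_param and the example URL are purely illustrative and not part of this diff):

/* Illustrative sketch only; mirrors the strncmp/atoi dispatch and the
 * DIV_ROUND_UP sector rounding used in nfs_client_open above. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <inttypes.h>

#define BDRV_SECTOR_SIZE 512
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

static int parse_nfs_param(const char *name, const char *value)
{
    if (!strncmp(name, "uid", 3)) {
        printf("nfs_set_uid(ctx, %d)\n", atoi(value));
    } else if (!strncmp(name, "gid", 3)) {
        printf("nfs_set_gid(ctx, %d)\n", atoi(value));
    } else if (!strncmp(name, "tcp-syncnt", 10)) {
        printf("nfs_set_tcp_syncnt(ctx, %d)\n", atoi(value));
    } else {
        fprintf(stderr, "Unknown NFS parameter name: %s\n", name);
        return -1;
    }
    return 0;
}

int main(void)
{
    /* e.g. nfs://server/export/disk.img?uid=0&gid=0&tcp-syncnt=3 */
    parse_nfs_param("uid", "0");
    parse_nfs_param("tcp-syncnt", "3");

    int64_t st_size = 1000;     /* bytes reported by fstat */
    printf("%" PRId64 " bytes -> %" PRId64 " sectors\n",
           st_size, DIV_ROUND_UP(st_size, BDRV_SECTOR_SIZE));   /* 2 sectors */
    return 0;
}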


@@ -24,8 +24,8 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
/**************************************************************/
@@ -49,9 +49,9 @@ typedef struct BDRVParallelsState {
CoMutex lock;
uint32_t *catalog_bitmap;
unsigned int catalog_size;
int catalog_size;
unsigned int tracks;
int tracks;
} BDRVParallelsState;
static int parallels_probe(const uint8_t *buf, int buf_size, const char *filename)
@@ -68,59 +68,40 @@ static int parallels_probe(const uint8_t *buf, int buf_size, const char *filenam
return 0;
}
static int parallels_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int parallels_open(BlockDriverState *bs, int flags)
{
BDRVParallelsState *s = bs->opaque;
int i;
struct parallels_header ph;
int ret;
bs->read_only = 1; // no write support yet
ret = bdrv_pread(bs->file, 0, &ph, sizeof(ph));
if (ret < 0) {
if (bdrv_pread(bs->file, 0, &ph, sizeof(ph)) != sizeof(ph))
goto fail;
}
if (memcmp(ph.magic, HEADER_MAGIC, 16) ||
(le32_to_cpu(ph.version) != HEADER_VERSION)) {
error_setg(errp, "Image not in Parallels format");
ret = -EINVAL;
(le32_to_cpu(ph.version) != HEADER_VERSION)) {
goto fail;
}
bs->total_sectors = le32_to_cpu(ph.nb_sectors);
s->tracks = le32_to_cpu(ph.tracks);
if (s->tracks == 0) {
error_setg(errp, "Invalid image: Zero sectors per track");
ret = -EINVAL;
goto fail;
}
s->catalog_size = le32_to_cpu(ph.catalog_entries);
if (s->catalog_size > INT_MAX / 4) {
error_setg(errp, "Catalog too large");
ret = -EFBIG;
goto fail;
}
s->catalog_bitmap = g_malloc(s->catalog_size * 4);
ret = bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4);
if (ret < 0) {
goto fail;
}
if (bdrv_pread(bs->file, 64, s->catalog_bitmap, s->catalog_size * 4) !=
s->catalog_size * 4)
goto fail;
for (i = 0; i < s->catalog_size; i++)
le32_to_cpus(&s->catalog_bitmap[i]);
qemu_co_mutex_init(&s->lock);
return 0;
fail:
g_free(s->catalog_bitmap);
return ret;
if (s->catalog_bitmap)
g_free(s->catalog_bitmap);
return -1;
}
static int64_t seek_to_sector(BlockDriverState *bs, int64_t sector_num)


@@ -1,622 +0,0 @@
/*
* Block layer qmp and info dump related functions
*
* Copyright (c) 2003-2008 Fabrice Bellard
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
#include "block/qapi.h"
#include "block/block_int.h"
#include "qmp-commands.h"
#include "qapi-visit.h"
#include "qapi/qmp-output-visitor.h"
#include "qapi/qmp/types.h"
BlockDeviceInfo *bdrv_block_device_info(BlockDriverState *bs)
{
BlockDeviceInfo *info = g_malloc0(sizeof(*info));
info->file = g_strdup(bs->filename);
info->ro = bs->read_only;
info->drv = g_strdup(bs->drv->format_name);
info->encrypted = bs->encrypted;
info->encryption_key_missing = bdrv_key_required(bs);
if (bs->node_name[0]) {
info->has_node_name = true;
info->node_name = g_strdup(bs->node_name);
}
if (bs->backing_file[0]) {
info->has_backing_file = true;
info->backing_file = g_strdup(bs->backing_file);
}
info->backing_file_depth = bdrv_get_backing_file_depth(bs);
if (bs->io_limits_enabled) {
ThrottleConfig cfg;
throttle_get_config(&bs->throttle_state, &cfg);
info->bps = cfg.buckets[THROTTLE_BPS_TOTAL].avg;
info->bps_rd = cfg.buckets[THROTTLE_BPS_READ].avg;
info->bps_wr = cfg.buckets[THROTTLE_BPS_WRITE].avg;
info->iops = cfg.buckets[THROTTLE_OPS_TOTAL].avg;
info->iops_rd = cfg.buckets[THROTTLE_OPS_READ].avg;
info->iops_wr = cfg.buckets[THROTTLE_OPS_WRITE].avg;
info->has_bps_max = cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->bps_max = cfg.buckets[THROTTLE_BPS_TOTAL].max;
info->has_bps_rd_max = cfg.buckets[THROTTLE_BPS_READ].max;
info->bps_rd_max = cfg.buckets[THROTTLE_BPS_READ].max;
info->has_bps_wr_max = cfg.buckets[THROTTLE_BPS_WRITE].max;
info->bps_wr_max = cfg.buckets[THROTTLE_BPS_WRITE].max;
info->has_iops_max = cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->iops_max = cfg.buckets[THROTTLE_OPS_TOTAL].max;
info->has_iops_rd_max = cfg.buckets[THROTTLE_OPS_READ].max;
info->iops_rd_max = cfg.buckets[THROTTLE_OPS_READ].max;
info->has_iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->iops_wr_max = cfg.buckets[THROTTLE_OPS_WRITE].max;
info->has_iops_size = cfg.op_size;
info->iops_size = cfg.op_size;
}
return info;
}
/*
* Returns 0 on success, with *p_list either set to describe snapshot
* information, or NULL because there are no snapshots. Returns -errno on
* error, with *p_list untouched.
*/
int bdrv_query_snapshot_info_list(BlockDriverState *bs,
SnapshotInfoList **p_list,
Error **errp)
{
int i, sn_count;
QEMUSnapshotInfo *sn_tab = NULL;
SnapshotInfoList *info_list, *cur_item = NULL, *head = NULL;
SnapshotInfo *info;
sn_count = bdrv_snapshot_list(bs, &sn_tab);
if (sn_count < 0) {
const char *dev = bdrv_get_device_name(bs);
switch (sn_count) {
case -ENOMEDIUM:
error_setg(errp, "Device '%s' is not inserted", dev);
break;
case -ENOTSUP:
error_setg(errp,
"Device '%s' does not support internal snapshots",
dev);
break;
default:
error_setg_errno(errp, -sn_count,
"Can't list snapshots of device '%s'", dev);
break;
}
return sn_count;
}
for (i = 0; i < sn_count; i++) {
info = g_new0(SnapshotInfo, 1);
info->id = g_strdup(sn_tab[i].id_str);
info->name = g_strdup(sn_tab[i].name);
info->vm_state_size = sn_tab[i].vm_state_size;
info->date_sec = sn_tab[i].date_sec;
info->date_nsec = sn_tab[i].date_nsec;
info->vm_clock_sec = sn_tab[i].vm_clock_nsec / 1000000000;
info->vm_clock_nsec = sn_tab[i].vm_clock_nsec % 1000000000;
info_list = g_new0(SnapshotInfoList, 1);
info_list->value = info;
/* XXX: waiting for the qapi to support qemu-queue.h types */
if (!cur_item) {
head = cur_item = info_list;
} else {
cur_item->next = info_list;
cur_item = info_list;
}
}
g_free(sn_tab);
*p_list = head;
return 0;
}
/**
* bdrv_query_image_info:
* @bs: block device to examine
* @p_info: location to store image information
* @errp: location to store error information
*
* Store "flat" image information in @p_info.
*
* "Flat" means it does *not* query backing image information,
* i.e. (*p_info)->has_backing_image will be set to false and
* (*p_info)->backing_image to NULL even when the image does in fact have
* a backing image.
*
* @p_info will be set only on success. On error, store error in @errp.
*/
void bdrv_query_image_info(BlockDriverState *bs,
ImageInfo **p_info,
Error **errp)
{
uint64_t total_sectors;
const char *backing_filename;
char backing_filename2[1024];
BlockDriverInfo bdi;
int ret;
Error *err = NULL;
ImageInfo *info = g_new0(ImageInfo, 1);
bdrv_get_geometry(bs, &total_sectors);
info->filename = g_strdup(bs->filename);
info->format = g_strdup(bdrv_get_format_name(bs));
info->virtual_size = total_sectors * 512;
info->actual_size = bdrv_get_allocated_file_size(bs);
info->has_actual_size = info->actual_size >= 0;
if (bdrv_is_encrypted(bs)) {
info->encrypted = true;
info->has_encrypted = true;
}
if (bdrv_get_info(bs, &bdi) >= 0) {
if (bdi.cluster_size != 0) {
info->cluster_size = bdi.cluster_size;
info->has_cluster_size = true;
}
info->dirty_flag = bdi.is_dirty;
info->has_dirty_flag = true;
}
info->format_specific = bdrv_get_specific_info(bs);
info->has_format_specific = info->format_specific != NULL;
backing_filename = bs->backing_file;
if (backing_filename[0] != '\0') {
info->backing_filename = g_strdup(backing_filename);
info->has_backing_filename = true;
bdrv_get_full_backing_filename(bs, backing_filename2,
sizeof(backing_filename2));
if (strcmp(backing_filename, backing_filename2) != 0) {
info->full_backing_filename =
g_strdup(backing_filename2);
info->has_full_backing_filename = true;
}
if (bs->backing_format[0]) {
info->backing_filename_format = g_strdup(bs->backing_format);
info->has_backing_filename_format = true;
}
}
ret = bdrv_query_snapshot_info_list(bs, &info->snapshots, &err);
switch (ret) {
case 0:
if (info->snapshots) {
info->has_snapshots = true;
}
break;
/* recoverable error */
case -ENOMEDIUM:
case -ENOTSUP:
error_free(err);
break;
default:
error_propagate(errp, err);
qapi_free_ImageInfo(info);
return;
}
*p_info = info;
}
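For orientation only, a sketch of how a caller might use the function above, based on the signature and helpers shown in this file; it assumes QEMU's internal headers and is not compilable on its own:

/* Hypothetical caller sketch; bs is an already-opened BlockDriverState. */
Error *err = NULL;
ImageInfo *info = NULL;

bdrv_query_image_info(bs, &info, &err);
if (err) {
    /* info was left unset; free the error (a real caller would report it) */
    error_free(err);
} else {
    printf("virtual size: %" PRId64 " bytes\n", info->virtual_size);
    qapi_free_ImageInfo(info);
}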
/* @p_info will be set only on success. */
void bdrv_query_info(BlockDriverState *bs,
BlockInfo **p_info,
Error **errp)
{
BlockInfo *info = g_malloc0(sizeof(*info));
BlockDriverState *bs0;
ImageInfo **p_image_info;
Error *local_err = NULL;
info->device = g_strdup(bs->device_name);
info->type = g_strdup("unknown");
info->locked = bdrv_dev_is_medium_locked(bs);
info->removable = bdrv_dev_has_removable_media(bs);
if (bdrv_dev_has_removable_media(bs)) {
info->has_tray_open = true;
info->tray_open = bdrv_dev_is_tray_open(bs);
}
if (bdrv_iostatus_is_enabled(bs)) {
info->has_io_status = true;
info->io_status = bs->iostatus;
}
if (!QLIST_EMPTY(&bs->dirty_bitmaps)) {
info->has_dirty_bitmaps = true;
info->dirty_bitmaps = bdrv_query_dirty_bitmaps(bs);
}
if (bs->drv) {
info->has_inserted = true;
info->inserted = bdrv_block_device_info(bs);
bs0 = bs;
p_image_info = &info->inserted->image;
while (1) {
bdrv_query_image_info(bs0, p_image_info, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
if (bs0->drv && bs0->backing_hd) {
bs0 = bs0->backing_hd;
(*p_image_info)->has_backing_image = true;
p_image_info = &((*p_image_info)->backing_image);
} else {
break;
}
}
}
*p_info = info;
return;
err:
qapi_free_BlockInfo(info);
}
BlockStats *bdrv_query_stats(const BlockDriverState *bs)
{
BlockStats *s;
s = g_malloc0(sizeof(*s));
if (bs->device_name[0]) {
s->has_device = true;
s->device = g_strdup(bs->device_name);
}
s->stats = g_malloc0(sizeof(*s->stats));
s->stats->rd_bytes = bs->nr_bytes[BDRV_ACCT_READ];
s->stats->wr_bytes = bs->nr_bytes[BDRV_ACCT_WRITE];
s->stats->rd_operations = bs->nr_ops[BDRV_ACCT_READ];
s->stats->wr_operations = bs->nr_ops[BDRV_ACCT_WRITE];
s->stats->wr_highest_offset = bs->wr_highest_sector * BDRV_SECTOR_SIZE;
s->stats->flush_operations = bs->nr_ops[BDRV_ACCT_FLUSH];
s->stats->wr_total_time_ns = bs->total_time_ns[BDRV_ACCT_WRITE];
s->stats->rd_total_time_ns = bs->total_time_ns[BDRV_ACCT_READ];
s->stats->flush_total_time_ns = bs->total_time_ns[BDRV_ACCT_FLUSH];
if (bs->file) {
s->has_parent = true;
s->parent = bdrv_query_stats(bs->file);
}
if (bs->backing_hd) {
s->has_backing = true;
s->backing = bdrv_query_stats(bs->backing_hd);
}
return s;
}
BlockInfoList *qmp_query_block(Error **errp)
{
BlockInfoList *head = NULL, **p_next = &head;
BlockDriverState *bs = NULL;
Error *local_err = NULL;
while ((bs = bdrv_next(bs))) {
BlockInfoList *info = g_malloc0(sizeof(*info));
bdrv_query_info(bs, &info->value, &local_err);
if (local_err) {
error_propagate(errp, local_err);
goto err;
}
*p_next = info;
p_next = &info->next;
}
return head;
err:
qapi_free_BlockInfoList(head);
return NULL;
}
BlockStatsList *qmp_query_blockstats(Error **errp)
{
BlockStatsList *head = NULL, **p_next = &head;
BlockDriverState *bs = NULL;
while ((bs = bdrv_next(bs))) {
BlockStatsList *info = g_malloc0(sizeof(*info));
info->value = bdrv_query_stats(bs);
*p_next = info;
p_next = &info->next;
}
return head;
}
#define NB_SUFFIXES 4
static char *get_human_readable_size(char *buf, int buf_size, int64_t size)
{
static const char suffixes[NB_SUFFIXES] = "KMGT";
int64_t base;
int i;
if (size <= 999) {
snprintf(buf, buf_size, "%" PRId64, size);
} else {
base = 1024;
for (i = 0; i < NB_SUFFIXES; i++) {
if (size < (10 * base)) {
snprintf(buf, buf_size, "%0.1f%c",
(double)size / base,
suffixes[i]);
break;
} else if (size < (1000 * base) || i == (NB_SUFFIXES - 1)) {
snprintf(buf, buf_size, "%" PRId64 "%c",
((size + (base >> 1)) / base),
suffixes[i]);
break;
}
base = base * 1024;
}
}
return buf;
}
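To make the two formatting branches above concrete, a couple of hand-checked values; this small standalone snippet only re-evaluates the same expressions and is illustrative, not part of the diff:

/* Illustrative only: the two branches of get_human_readable_size above. */
#include <stdio.h>
#include <inttypes.h>

int main(void)
{
    int64_t base = 1024;
    /* 1536 < 10 * base, so the fractional branch yields "1.5K" */
    printf("%0.1f%c\n", (double)1536 / base, 'K');
    /* 524288 >= 10 * base but < 1000 * base, so the rounded branch yields "512K" */
    printf("%" PRId64 "%c\n", (524288 + (base >> 1)) / base, 'K');
    return 0;
}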
void bdrv_snapshot_dump(fprintf_function func_fprintf, void *f,
QEMUSnapshotInfo *sn)
{
char buf1[128], date_buf[128], clock_buf[128];
struct tm tm;
time_t ti;
int64_t secs;
if (!sn) {
func_fprintf(f,
"%-10s%-20s%7s%20s%15s",
"ID", "TAG", "VM SIZE", "DATE", "VM CLOCK");
} else {
ti = sn->date_sec;
localtime_r(&ti, &tm);
strftime(date_buf, sizeof(date_buf),
"%Y-%m-%d %H:%M:%S", &tm);
secs = sn->vm_clock_nsec / 1000000000;
snprintf(clock_buf, sizeof(clock_buf),
"%02d:%02d:%02d.%03d",
(int)(secs / 3600),
(int)((secs / 60) % 60),
(int)(secs % 60),
(int)((sn->vm_clock_nsec / 1000000) % 1000));
func_fprintf(f,
"%-10s%-20s%7s%20s%15s",
sn->id_str, sn->name,
get_human_readable_size(buf1, sizeof(buf1),
sn->vm_state_size),
date_buf,
clock_buf);
}
}
static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
QDict *dict);
static void dump_qlist(fprintf_function func_fprintf, void *f, int indentation,
QList *list);
static void dump_qobject(fprintf_function func_fprintf, void *f,
int comp_indent, QObject *obj)
{
switch (qobject_type(obj)) {
case QTYPE_QINT: {
QInt *value = qobject_to_qint(obj);
func_fprintf(f, "%" PRId64, qint_get_int(value));
break;
}
case QTYPE_QSTRING: {
QString *value = qobject_to_qstring(obj);
func_fprintf(f, "%s", qstring_get_str(value));
break;
}
case QTYPE_QDICT: {
QDict *value = qobject_to_qdict(obj);
dump_qdict(func_fprintf, f, comp_indent, value);
break;
}
case QTYPE_QLIST: {
QList *value = qobject_to_qlist(obj);
dump_qlist(func_fprintf, f, comp_indent, value);
break;
}
case QTYPE_QFLOAT: {
QFloat *value = qobject_to_qfloat(obj);
func_fprintf(f, "%g", qfloat_get_double(value));
break;
}
case QTYPE_QBOOL: {
QBool *value = qobject_to_qbool(obj);
func_fprintf(f, "%s", qbool_get_int(value) ? "true" : "false");
break;
}
case QTYPE_QERROR: {
QString *value = qerror_human((QError *)obj);
func_fprintf(f, "%s", qstring_get_str(value));
break;
}
case QTYPE_NONE:
break;
case QTYPE_MAX:
default:
abort();
}
}
static void dump_qlist(fprintf_function func_fprintf, void *f, int indentation,
QList *list)
{
const QListEntry *entry;
int i = 0;
for (entry = qlist_first(list); entry; entry = qlist_next(entry), i++) {
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
const char *format = composite ? "%*s[%i]:\n" : "%*s[%i]: ";
func_fprintf(f, format, indentation * 4, "", i);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
}
}
}
static void dump_qdict(fprintf_function func_fprintf, void *f, int indentation,
QDict *dict)
{
const QDictEntry *entry;
for (entry = qdict_first(dict); entry; entry = qdict_next(dict, entry)) {
qtype_code type = qobject_type(entry->value);
bool composite = (type == QTYPE_QDICT || type == QTYPE_QLIST);
const char *format = composite ? "%*s%s:\n" : "%*s%s: ";
char key[strlen(entry->key) + 1];
int i;
/* replace dashes with spaces in key (variable) names */
for (i = 0; entry->key[i]; i++) {
key[i] = entry->key[i] == '-' ? ' ' : entry->key[i];
}
key[i] = 0;
func_fprintf(f, format, indentation * 4, "", key);
dump_qobject(func_fprintf, f, indentation + 1, entry->value);
if (!composite) {
func_fprintf(f, "\n");
}
}
}
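The dash-to-space key rewrite in dump_qdict above is easy to check in isolation; a minimal standalone sketch follows (the example key and value are made up):

/* Illustrative only: the key rewrite and indentation used by dump_qdict above. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *orig = "lazy-refcounts";
    char key[strlen(orig) + 1];
    int i;

    /* replace dashes with spaces in the key, as above */
    for (i = 0; orig[i]; i++) {
        key[i] = orig[i] == '-' ? ' ' : orig[i];
    }
    key[i] = 0;

    /* indentation level 1 -> four leading spaces: "    lazy refcounts: false" */
    printf("%*s%s: %s\n", 1 * 4, "", key, "false");
    return 0;
}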
void bdrv_image_info_specific_dump(fprintf_function func_fprintf, void *f,
ImageInfoSpecific *info_spec)
{
QmpOutputVisitor *ov = qmp_output_visitor_new();
QObject *obj, *data;
visit_type_ImageInfoSpecific(qmp_output_get_visitor(ov), &info_spec, NULL,
&error_abort);
obj = qmp_output_get_qobject(ov);
assert(qobject_type(obj) == QTYPE_QDICT);
data = qdict_get(qobject_to_qdict(obj), "data");
dump_qobject(func_fprintf, f, 1, data);
qmp_output_visitor_cleanup(ov);
}
void bdrv_image_info_dump(fprintf_function func_fprintf, void *f,
ImageInfo *info)
{
char size_buf[128], dsize_buf[128];
if (!info->has_actual_size) {
snprintf(dsize_buf, sizeof(dsize_buf), "unavailable");
} else {
get_human_readable_size(dsize_buf, sizeof(dsize_buf),
info->actual_size);
}
get_human_readable_size(size_buf, sizeof(size_buf), info->virtual_size);
func_fprintf(f,
"image: %s\n"
"file format: %s\n"
"virtual size: %s (%" PRId64 " bytes)\n"
"disk size: %s\n",
info->filename, info->format, size_buf,
info->virtual_size,
dsize_buf);
if (info->has_encrypted && info->encrypted) {
func_fprintf(f, "encrypted: yes\n");
}
if (info->has_cluster_size) {
func_fprintf(f, "cluster_size: %" PRId64 "\n",
info->cluster_size);
}
if (info->has_dirty_flag && info->dirty_flag) {
func_fprintf(f, "cleanly shut down: no\n");
}
if (info->has_backing_filename) {
func_fprintf(f, "backing file: %s", info->backing_filename);
if (info->has_full_backing_filename) {
func_fprintf(f, " (actual path: %s)", info->full_backing_filename);
}
func_fprintf(f, "\n");
if (info->has_backing_filename_format) {
func_fprintf(f, "backing file format: %s\n",
info->backing_filename_format);
}
}
if (info->has_snapshots) {
SnapshotInfoList *elem;
func_fprintf(f, "Snapshot list:\n");
bdrv_snapshot_dump(func_fprintf, f, NULL);
func_fprintf(f, "\n");
/* Ideally bdrv_snapshot_dump() would operate on SnapshotInfoList but
* we convert to the block layer's native QEMUSnapshotInfo for now.
*/
for (elem = info->snapshots; elem; elem = elem->next) {
QEMUSnapshotInfo sn = {
.vm_state_size = elem->value->vm_state_size,
.date_sec = elem->value->date_sec,
.date_nsec = elem->value->date_nsec,
.vm_clock_nsec = elem->value->vm_clock_sec * 1000000000ULL +
elem->value->vm_clock_nsec,
};
pstrcpy(sn.id_str, sizeof(sn.id_str), elem->value->id);
pstrcpy(sn.name, sizeof(sn.name), elem->value->name);
bdrv_snapshot_dump(func_fprintf, f, &sn);
func_fprintf(f, "\n");
}
}
if (info->has_format_specific) {
func_fprintf(f, "Format specific information:\n");
bdrv_image_info_specific_dump(func_fprintf, f, info->format_specific);
}
}


@@ -22,11 +22,11 @@
* THE SOFTWARE.
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "qemu/module.h"
#include "block_int.h"
#include "module.h"
#include <zlib.h>
#include "qemu/aes.h"
#include "migration/migration.h"
#include "aes.h"
#include "migration.h"
/**************************************************************/
/* QEMU COW block driver with compression and encryption support */
@@ -92,8 +92,7 @@ static int qcow_probe(const uint8_t *buf, int buf_size, const char *filename)
return 0;
}
static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
Error **errp)
static int qcow_open(BlockDriverState *bs, int flags)
{
BDRVQcowState *s = bs->opaque;
int len, i, shift, ret;
@@ -113,27 +112,23 @@ static int qcow_open(BlockDriverState *bs, QDict *options, int flags,
be64_to_cpus(&header.l1_table_offset);
if (header.magic != QCOW_MAGIC) {
error_setg(errp, "Image not in qcow format");
ret = -EINVAL;
goto fail;
}
if (header.version != QCOW_VERSION) {
char version[64];
snprintf(version, sizeof(version), "QCOW version %" PRIu32,
header.version);
error_set(errp, QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
snprintf(version, sizeof(version), "QCOW version %d", header.version);
qerror_report(QERR_UNKNOWN_BLOCK_FORMAT_FEATURE,
bs->device_name, "qcow", version);
ret = -ENOTSUP;
goto fail;
}
if (header.size <= 1 || header.cluster_bits < 9) {
error_setg(errp, "invalid value in qcow header");
ret = -EINVAL;
goto fail;
}
if (header.crypt_method > QCOW_CRYPT_AES) {
error_setg(errp, "invalid encryption method in qcow header");
ret = -EINVAL;
goto fail;
}
@@ -400,7 +395,7 @@ static uint64_t get_cluster_offset(BlockDriverState *bs,
return cluster_offset;
}
static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
static int coroutine_fn qcow_co_is_allocated(BlockDriverState *bs,
int64_t sector_num, int nb_sectors, int *pnum)
{
BDRVQcowState *s = bs->opaque;
@@ -415,14 +410,7 @@ static int64_t coroutine_fn qcow_co_get_block_status(BlockDriverState *bs,
if (n > nb_sectors)
n = nb_sectors;
*pnum = n;
if (!cluster_offset) {
return 0;
}
if ((cluster_offset & QCOW_OFLAG_COMPRESSED) || s->crypt_method) {
return BDRV_BLOCK_DATA;
}
cluster_offset |= (index_in_cluster << BDRV_SECTOR_BITS);
return BDRV_BLOCK_DATA | BDRV_BLOCK_OFFSET_VALID | cluster_offset;
return (cluster_offset != 0);
}
static int decompress_buffer(uint8_t *out_buf, int out_buf_size,
@@ -663,8 +651,7 @@ static void qcow_close(BlockDriverState *bs)
error_free(s->migration_blocker);
}
static int qcow_create(const char *filename, QEMUOptionParameter *options,
Error **errp)
static int qcow_create(const char *filename, QEMUOptionParameter *options)
{
int header_size, backing_filename_len, l1_size, shift, i;
QCowHeader header;
@@ -672,7 +659,6 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
int64_t total_size = 0;
const char *backing_file = NULL;
int flags = 0;
Error *local_err = NULL;
int ret;
BlockDriverState *qcow_bs;
@@ -688,17 +674,13 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
options++;
}
ret = bdrv_create_file(filename, options, &local_err);
ret = bdrv_create_file(filename, options);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
qcow_bs = NULL;
ret = bdrv_open(&qcow_bs, filename, NULL, NULL,
BDRV_O_RDWR | BDRV_O_PROTOCOL, NULL, &local_err);
ret = bdrv_file_open(&qcow_bs, filename, BDRV_O_RDWR);
if (ret < 0) {
error_propagate(errp, local_err);
return ret;
}
@@ -724,7 +706,7 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
backing_file = NULL;
}
header.cluster_bits = 9; /* 512 byte cluster to avoid copying
unmodified sectors */
unmodifyed sectors */
header.l2_bits = 12; /* 32 KB L2 tables */
} else {
header.cluster_bits = 12; /* 4 KB clusters */
@@ -769,7 +751,7 @@ static int qcow_create(const char *filename, QEMUOptionParameter *options,
g_free(tmp);
ret = 0;
exit:
bdrv_unref(qcow_bs);
bdrv_delete(qcow_bs);
return ret;
}
@@ -805,21 +787,8 @@ static int qcow_write_compressed(BlockDriverState *bs, int64_t sector_num,
uint8_t *out_buf;
uint64_t cluster_offset;
if (nb_sectors != s->cluster_sectors) {
ret = -EINVAL;
/* Zero-pad last write if image size is not cluster aligned */
if (sector_num + nb_sectors == bs->total_sectors &&
nb_sectors < s->cluster_sectors) {
uint8_t *pad_buf = qemu_blockalign(bs, s->cluster_size);
memset(pad_buf, 0, s->cluster_size);
memcpy(pad_buf, buf, nb_sectors * BDRV_SECTOR_SIZE);
ret = qcow_write_compressed(bs, sector_num,
pad_buf, s->cluster_sectors);
qemu_vfree(pad_buf);
}
return ret;
}
if (nb_sectors != s->cluster_sectors)
return -EINVAL;
out_buf = g_malloc(s->cluster_size + (s->cluster_size / 1000) + 128);
@@ -910,11 +879,10 @@ static BlockDriver bdrv_qcow = {
.bdrv_close = qcow_close,
.bdrv_reopen_prepare = qcow_reopen_prepare,
.bdrv_create = qcow_create,
.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_co_readv = qcow_co_readv,
.bdrv_co_writev = qcow_co_writev,
.bdrv_co_get_block_status = qcow_co_get_block_status,
.bdrv_co_is_allocated = qcow_co_is_allocated,
.bdrv_set_key = qcow_set_key,
.bdrv_make_empty = qcow_make_empty,


@@ -22,7 +22,7 @@
* THE SOFTWARE.
*/
#include "block/block_int.h"
#include "block_int.h"
#include "qemu-common.h"
#include "qcow2.h"
#include "trace.h"
@@ -114,21 +114,6 @@ static int qcow2_cache_entry_flush(BlockDriverState *bs, Qcow2Cache *c, int i)
return ret;
}
if (c == s->refcount_block_cache) {
ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_REFCOUNT_BLOCK,
c->entries[i].offset, s->cluster_size);
} else if (c == s->l2_table_cache) {
ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_ACTIVE_L2,
c->entries[i].offset, s->cluster_size);
} else {
ret = qcow2_pre_write_overlap_check(bs, 0,
c->entries[i].offset, s->cluster_size);
}
if (ret < 0) {
return ret;
}
if (c == s->refcount_block_cache) {
BLKDBG_EVENT(bs->file, BLKDBG_REFBLOCK_UPDATE_PART);
} else if (c == s->l2_table_cache) {
@@ -200,24 +185,6 @@ void qcow2_cache_depends_on_flush(Qcow2Cache *c)
c->depends_on_flush = true;
}
int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c)
{
int ret, i;
ret = qcow2_cache_flush(bs, c);
if (ret < 0) {
return ret;
}
for (i = 0; i < c->size; i++) {
assert(c->entries[i].ref == 0);
c->entries[i].offset = 0;
c->entries[i].cache_hits = 0;
}
return 0;
}
static int qcow2_cache_find_entry_to_replace(Qcow2Cache *c)
{
int i;

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -23,9 +23,34 @@
*/
#include "qemu-common.h"
#include "block/block_int.h"
#include "block_int.h"
#include "block/qcow2.h"
typedef struct QEMU_PACKED QCowSnapshotHeader {
/* header is 8 byte aligned */
uint64_t l1_table_offset;
uint32_t l1_size;
uint16_t id_str_size;
uint16_t name_size;
uint32_t date_sec;
uint32_t date_nsec;
uint64_t vm_clock_nsec;
uint32_t vm_state_size;
uint32_t extra_data_size; /* for extension */
/* extra data follows */
/* id_str follows */
/* name follows */
} QCowSnapshotHeader;
typedef struct QEMU_PACKED QCowSnapshotExtraData {
uint64_t vm_state_size_large;
uint64_t disk_size;
} QCowSnapshotExtraData;
void qcow2_free_snapshots(BlockDriverState *bs)
{
BDRVQcowState *s = bs->opaque;
@@ -116,14 +141,8 @@ int qcow2_read_snapshots(BlockDriverState *bs)
}
offset += name_size;
sn->name[name_size] = '\0';
if (offset - s->snapshots_offset > QCOW_MAX_SNAPSHOTS_SIZE) {
ret = -EFBIG;
goto fail;
}
}
assert(offset - s->snapshots_offset <= INT_MAX);
s->snapshots_size = offset - s->snapshots_offset;
return 0;
@@ -144,7 +163,7 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
uint32_t nb_snapshots;
uint64_t snapshots_offset;
} QEMU_PACKED header_data;
int64_t offset, snapshots_offset = 0;
int64_t offset, snapshots_offset;
int ret;
/* compute the size of the snapshots */
@@ -156,35 +175,16 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
offset += sizeof(extra);
offset += strlen(sn->id_str);
offset += strlen(sn->name);
if (offset > QCOW_MAX_SNAPSHOTS_SIZE) {
ret = -EFBIG;
goto fail;
}
}
assert(offset <= INT_MAX);
snapshots_size = offset;
/* Allocate space for the new snapshot list */
snapshots_offset = qcow2_alloc_clusters(bs, snapshots_size);
bdrv_flush(bs->file);
offset = snapshots_offset;
if (offset < 0) {
ret = offset;
goto fail;
return offset;
}
ret = bdrv_flush(bs);
if (ret < 0) {
goto fail;
}
/* The snapshot list position has not yet been updated, so these clusters
* must indeed be completely free */
ret = qcow2_pre_write_overlap_check(bs, 0, offset, snapshots_size);
if (ret < 0) {
goto fail;
}
/* Write all snapshots to the new list */
for(i = 0; i < s->nb_snapshots; i++) {
@@ -208,7 +208,6 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
id_str_size = strlen(sn->id_str);
name_size = strlen(sn->name);
assert(id_str_size <= UINT16_MAX && name_size <= UINT16_MAX);
h.id_str_size = cpu_to_be16(id_str_size);
h.name_size = cpu_to_be16(name_size);
offset = align_offset(offset, 8);
@@ -260,17 +259,12 @@ static int qcow2_write_snapshots(BlockDriverState *bs)
}
/* free the old snapshot table */
qcow2_free_clusters(bs, s->snapshots_offset, s->snapshots_size,
QCOW2_DISCARD_SNAPSHOT);
qcow2_free_clusters(bs, s->snapshots_offset, s->snapshots_size);
s->snapshots_offset = snapshots_offset;
s->snapshots_size = snapshots_size;
return 0;
fail:
if (snapshots_offset > 0) {
qcow2_free_clusters(bs, snapshots_offset, snapshots_size,
QCOW2_DISCARD_ALWAYS);
}
return ret;
}
@@ -279,8 +273,7 @@ static void find_new_snapshot_id(BlockDriverState *bs,
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot *sn;
int i;
unsigned long id, id_max = 0;
int i, id, id_max = 0;
for(i = 0; i < s->nb_snapshots; i++) {
sn = s->snapshots + i;
@@ -288,50 +281,34 @@ static void find_new_snapshot_id(BlockDriverState *bs,
if (id > id_max)
id_max = id;
}
snprintf(id_str, id_str_size, "%lu", id_max + 1);
snprintf(id_str, id_str_size, "%d", id_max + 1);
}
static int find_snapshot_by_id_and_name(BlockDriverState *bs,
const char *id,
const char *name)
static int find_snapshot_by_id(BlockDriverState *bs, const char *id_str)
{
BDRVQcowState *s = bs->opaque;
int i;
if (id && name) {
for (i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].id_str, id) &&
!strcmp(s->snapshots[i].name, name)) {
return i;
}
}
} else if (id) {
for (i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].id_str, id)) {
return i;
}
}
} else if (name) {
for (i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].name, name)) {
return i;
}
}
for(i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].id_str, id_str))
return i;
}
return -1;
}
static int find_snapshot_by_id_or_name(BlockDriverState *bs,
const char *id_or_name)
static int find_snapshot_by_id_or_name(BlockDriverState *bs, const char *name)
{
int ret;
BDRVQcowState *s = bs->opaque;
int i, ret;
ret = find_snapshot_by_id_and_name(bs, id_or_name, NULL);
if (ret >= 0) {
ret = find_snapshot_by_id(bs, name);
if (ret >= 0)
return ret;
for(i = 0; i < s->nb_snapshots; i++) {
if (!strcmp(s->snapshots[i].name, name))
return i;
}
return find_snapshot_by_id_and_name(bs, NULL, id_or_name);
return -1;
}
/* if no id is provided, a new one is constructed */
@@ -345,10 +322,6 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
uint64_t *l1_table = NULL;
int64_t l1_table_offset;
if (s->nb_snapshots >= QCOW_MAX_SNAPSHOTS) {
return -EFBIG;
}
memset(sn, 0, sizeof(*sn));
/* Generate an ID if it wasn't passed */
@@ -357,7 +330,7 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
}
/* Check that the ID is unique */
if (find_snapshot_by_id_and_name(bs, sn_info->id_str, NULL) >= 0) {
if (find_snapshot_by_id(bs, sn_info->id_str) >= 0) {
return -EEXIST;
}
@@ -386,12 +359,6 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
l1_table[i] = cpu_to_be64(s->l1_table[i]);
}
ret = qcow2_pre_write_overlap_check(bs, 0, sn->l1_table_offset,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
goto fail;
}
ret = bdrv_pwrite(bs->file, sn->l1_table_offset, l1_table,
s->l1_size * sizeof(uint64_t));
if (ret < 0) {
@@ -411,6 +378,11 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
goto fail;
}
ret = bdrv_flush(bs);
if (ret < 0) {
goto fail;
}
/* Append the new snapshot to the snapshot list */
new_snapshot_list = g_malloc((s->nb_snapshots + 1) * sizeof(QCowSnapshot));
if (s->snapshots) {
@@ -425,19 +397,11 @@ int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info)
if (ret < 0) {
g_free(s->snapshots);
s->snapshots = old_snapshot_list;
s->nb_snapshots--;
goto fail;
}
g_free(old_snapshot_list);
/* The VM state isn't needed any more in the active L1 table; in fact, it
* hurts by causing expensive COW for the next snapshot. */
qcow2_discard_clusters(bs, qcow2_vm_state_offset(s),
align_offset(sn->vm_state_size, s->cluster_size)
>> BDRV_SECTOR_BITS,
QCOW2_DISCARD_NEVER);
#ifdef DEBUG_ALLOC
{
BdrvCheckResult result = {0};
@@ -512,12 +476,6 @@ int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id)
goto fail;
}
ret = qcow2_pre_write_overlap_check(bs, QCOW2_OL_ACTIVE_L1,
s->l1_table_offset, cur_l1_bytes);
if (ret < 0) {
goto fail;
}
ret = bdrv_pwrite_sync(bs->file, s->l1_table_offset, sn_l1_table,
cur_l1_bytes);
if (ret < 0) {
@@ -574,19 +532,15 @@ fail:
return ret;
}
int qcow2_snapshot_delete(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
int qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id)
{
BDRVQcowState *s = bs->opaque;
QCowSnapshot sn;
int snapshot_index, ret;
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_and_name(bs, snapshot_id, name);
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_id);
if (snapshot_index < 0) {
error_setg(errp, "Can't find the snapshot");
return -ENOENT;
}
sn = s->snapshots[snapshot_index];
@@ -598,8 +552,6 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
s->nb_snapshots--;
ret = qcow2_write_snapshots(bs);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Failed to remove snapshot from snapshot list");
return ret;
}
@@ -617,17 +569,13 @@ int qcow2_snapshot_delete(BlockDriverState *bs,
ret = qcow2_update_snapshot_refcount(bs, sn.l1_table_offset,
sn.l1_size, -1);
if (ret < 0) {
error_setg_errno(errp, -ret, "Failed to free the cluster and L1 table");
return ret;
}
qcow2_free_clusters(bs, sn.l1_table_offset, sn.l1_size * sizeof(uint64_t),
QCOW2_DISCARD_SNAPSHOT);
qcow2_free_clusters(bs, sn.l1_table_offset, sn.l1_size * sizeof(uint64_t));
/* must update the copied flag on the current cluster offsets */
ret = qcow2_update_snapshot_refcount(bs, s->l1_table_offset, s->l1_size, 0);
if (ret < 0) {
error_setg_errno(errp, -ret,
"Failed to update snapshot status in disk");
return ret;
}
@@ -669,10 +617,7 @@ int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab)
return s->nb_snapshots;
}
int qcow2_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp)
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name)
{
int i, snapshot_index;
BDRVQcowState *s = bs->opaque;
@@ -684,25 +629,18 @@ int qcow2_snapshot_load_tmp(BlockDriverState *bs,
assert(bs->read_only);
/* Search the snapshot */
snapshot_index = find_snapshot_by_id_and_name(bs, snapshot_id, name);
snapshot_index = find_snapshot_by_id_or_name(bs, snapshot_name);
if (snapshot_index < 0) {
error_setg(errp,
"Can't find snapshot");
return -ENOENT;
}
sn = &s->snapshots[snapshot_index];
/* Allocate and read in the snapshot's L1 table */
if (sn->l1_size > QCOW_MAX_L1_SIZE) {
error_setg(errp, "Snapshot L1 table too large");
return -EFBIG;
}
new_l1_bytes = sn->l1_size * sizeof(uint64_t);
new_l1_bytes = s->l1_size * sizeof(uint64_t);
new_l1_table = g_malloc0(align_offset(new_l1_bytes, 512));
ret = bdrv_pread(bs->file, sn->l1_table_offset, new_l1_table, new_l1_bytes);
if (ret < 0) {
error_setg(errp, "Failed to read l1 table for snapshot");
g_free(new_l1_table);
return ret;
}

File diff suppressed because it is too large


@@ -25,8 +25,8 @@
#ifndef BLOCK_QCOW2_H
#define BLOCK_QCOW2_H
#include "qemu/aes.h"
#include "block/coroutine.h"
#include "aes.h"
#include "qemu-coroutine.h"
//#define DEBUG_ALLOC
//#define DEBUG_ALLOC2
@@ -38,26 +38,13 @@
#define QCOW_CRYPT_AES 1
#define QCOW_MAX_CRYPT_CLUSTERS 32
#define QCOW_MAX_SNAPSHOTS 65536
/* 8 MB refcount table is enough for 2 PB images at 64k cluster size
* (128 GB for 512 byte clusters, 2 EB for 2 MB clusters) */
#define QCOW_MAX_REFTABLE_SIZE 0x800000
/* 32 MB L1 table is enough for 2 PB images at 64k cluster size
* (128 GB for 512 byte clusters, 2 EB for 2 MB clusters) */
#define QCOW_MAX_L1_SIZE 0x2000000
/* Allow for an average of 1k per snapshot table entry, should be plenty of
* space for snapshot names and IDs */
#define QCOW_MAX_SNAPSHOTS_SIZE (1024 * QCOW_MAX_SNAPSHOTS)
/* indicate that the refcount of the referenced cluster is exactly one. */
#define QCOW_OFLAG_COPIED (1ULL << 63)
#define QCOW_OFLAG_COPIED (1LL << 63)
/* indicate that the cluster is compressed (they never have the copied flag) */
#define QCOW_OFLAG_COMPRESSED (1ULL << 62)
#define QCOW_OFLAG_COMPRESSED (1LL << 62)
/* The cluster reads as all zeros */
#define QCOW_OFLAG_ZERO (1ULL << 0)
#define QCOW_OFLAG_ZERO (1LL << 0)
#define REFCOUNT_SHIFT 1 /* refcount size is 2 bytes */
@@ -71,21 +58,6 @@
#define DEFAULT_CLUSTER_SIZE 65536
#define QCOW2_OPT_LAZY_REFCOUNTS "lazy-refcounts"
#define QCOW2_OPT_DISCARD_REQUEST "pass-discard-request"
#define QCOW2_OPT_DISCARD_SNAPSHOT "pass-discard-snapshot"
#define QCOW2_OPT_DISCARD_OTHER "pass-discard-other"
#define QCOW2_OPT_OVERLAP "overlap-check"
#define QCOW2_OPT_OVERLAP_MAIN_HEADER "overlap-check.main-header"
#define QCOW2_OPT_OVERLAP_ACTIVE_L1 "overlap-check.active-l1"
#define QCOW2_OPT_OVERLAP_ACTIVE_L2 "overlap-check.active-l2"
#define QCOW2_OPT_OVERLAP_REFCOUNT_TABLE "overlap-check.refcount-table"
#define QCOW2_OPT_OVERLAP_REFCOUNT_BLOCK "overlap-check.refcount-block"
#define QCOW2_OPT_OVERLAP_SNAPSHOT_TABLE "overlap-check.snapshot-table"
#define QCOW2_OPT_OVERLAP_INACTIVE_L1 "overlap-check.inactive-l1"
#define QCOW2_OPT_OVERLAP_INACTIVE_L2 "overlap-check.inactive-l2"
typedef struct QCowHeader {
uint32_t magic;
uint32_t version;
@@ -108,33 +80,7 @@ typedef struct QCowHeader {
uint32_t refcount_order;
uint32_t header_length;
} QEMU_PACKED QCowHeader;
typedef struct QEMU_PACKED QCowSnapshotHeader {
/* header is 8 byte aligned */
uint64_t l1_table_offset;
uint32_t l1_size;
uint16_t id_str_size;
uint16_t name_size;
uint32_t date_sec;
uint32_t date_nsec;
uint64_t vm_clock_nsec;
uint32_t vm_state_size;
uint32_t extra_data_size; /* for extension */
/* extra data follows */
/* id_str follows */
/* name follows */
} QCowSnapshotHeader;
typedef struct QEMU_PACKED QCowSnapshotExtraData {
uint64_t vm_state_size_large;
uint64_t disk_size;
} QCowSnapshotExtraData;
} QCowHeader;
typedef struct QCowSnapshot {
uint64_t l1_table_offset;
@@ -167,12 +113,9 @@ enum {
/* Incompatible feature bits */
enum {
QCOW2_INCOMPAT_DIRTY_BITNR = 0,
QCOW2_INCOMPAT_CORRUPT_BITNR = 1,
QCOW2_INCOMPAT_DIRTY = 1 << QCOW2_INCOMPAT_DIRTY_BITNR,
QCOW2_INCOMPAT_CORRUPT = 1 << QCOW2_INCOMPAT_CORRUPT_BITNR,
QCOW2_INCOMPAT_MASK = QCOW2_INCOMPAT_DIRTY
| QCOW2_INCOMPAT_CORRUPT,
QCOW2_INCOMPAT_MASK = QCOW2_INCOMPAT_DIRTY,
};
/* Compatible feature bits */
@@ -183,28 +126,12 @@ enum {
QCOW2_COMPAT_FEAT_MASK = QCOW2_COMPAT_LAZY_REFCOUNTS,
};
enum qcow2_discard_type {
QCOW2_DISCARD_NEVER = 0,
QCOW2_DISCARD_ALWAYS,
QCOW2_DISCARD_REQUEST,
QCOW2_DISCARD_SNAPSHOT,
QCOW2_DISCARD_OTHER,
QCOW2_DISCARD_MAX
};
typedef struct Qcow2Feature {
uint8_t type;
uint8_t bit;
char name[46];
} QEMU_PACKED Qcow2Feature;
typedef struct Qcow2DiscardRegion {
BlockDriverState *bs;
uint64_t offset;
uint64_t bytes;
QTAILQ_ENTRY(Qcow2DiscardRegion) next;
} Qcow2DiscardRegion;
typedef struct BDRVQcowState {
int cluster_bits;
int cluster_size;
@@ -230,8 +157,8 @@ typedef struct BDRVQcowState {
uint64_t *refcount_table;
uint64_t refcount_table_offset;
uint32_t refcount_table_size;
uint64_t free_cluster_index;
uint64_t free_byte_offset;
int64_t free_cluster_index;
int64_t free_byte_offset;
CoMutex lock;
@@ -241,17 +168,11 @@ typedef struct BDRVQcowState {
AES_KEY aes_decrypt_key;
uint64_t snapshots_offset;
int snapshots_size;
unsigned int nb_snapshots;
int nb_snapshots;
QCowSnapshot *snapshots;
int flags;
int qcow_version;
bool use_lazy_refcounts;
int refcount_order;
bool discard_passthrough[QCOW2_DISCARD_MAX];
int overlap_check; /* bitmask of Qcow2MetadataOverlap values */
uint64_t incompatible_features;
uint64_t compatible_features;
@@ -260,8 +181,6 @@ typedef struct BDRVQcowState {
size_t unknown_header_fields_size;
void* unknown_header_fields;
QLIST_HEAD(, Qcow2UnknownHeaderExtension) unknown_header_ext;
QTAILQ_HEAD (, Qcow2DiscardRegion) discards;
bool cache_discards;
} BDRVQcowState;
/* XXX: use std qcow open function ? */
@@ -277,59 +196,17 @@ typedef struct QCowCreateState {
struct QCowAIOCB;
typedef struct Qcow2COWRegion {
/**
* Offset of the COW region in bytes from the start of the first cluster
* touched by the request.
*/
uint64_t offset;
/** Number of sectors to copy */
int nb_sectors;
} Qcow2COWRegion;
/**
* Describes an in-flight (part of a) write request that writes to clusters
* that are not referenced in their L2 table yet.
*/
/* XXX This could be private for qcow2-cluster.c */
typedef struct QCowL2Meta
{
/** Guest offset of the first newly allocated cluster */
uint64_t offset;
/** Host offset of the first newly allocated cluster */
uint64_t cluster_offset;
uint64_t alloc_offset;
/**
* Number of sectors from the start of the first allocated cluster to
* the end of the (possibly shortened) request
*/
int n_start;
int nb_available;
/** Number of newly allocated clusters */
int nb_clusters;
/**
* Requests that overlap with this allocation and wait to be restarted
* when the allocating request has completed.
*/
CoQueue dependent_requests;
/**
* The COW Region between the start of the first allocated cluster and the
* area the guest actually writes to.
*/
Qcow2COWRegion cow_start;
/**
* The COW Region between the area the guest actually writes to and the
* end of the last allocated cluster.
*/
Qcow2COWRegion cow_end;
/** Pointer to next L2Meta of the same write request */
struct QCowL2Meta *next;
QLIST_ENTRY(QCowL2Meta) next_in_flight;
} QCowL2Meta;
@@ -340,93 +217,29 @@ enum {
QCOW2_CLUSTER_ZERO
};
typedef enum QCow2MetadataOverlap {
QCOW2_OL_MAIN_HEADER_BITNR = 0,
QCOW2_OL_ACTIVE_L1_BITNR = 1,
QCOW2_OL_ACTIVE_L2_BITNR = 2,
QCOW2_OL_REFCOUNT_TABLE_BITNR = 3,
QCOW2_OL_REFCOUNT_BLOCK_BITNR = 4,
QCOW2_OL_SNAPSHOT_TABLE_BITNR = 5,
QCOW2_OL_INACTIVE_L1_BITNR = 6,
QCOW2_OL_INACTIVE_L2_BITNR = 7,
QCOW2_OL_MAX_BITNR = 8,
QCOW2_OL_NONE = 0,
QCOW2_OL_MAIN_HEADER = (1 << QCOW2_OL_MAIN_HEADER_BITNR),
QCOW2_OL_ACTIVE_L1 = (1 << QCOW2_OL_ACTIVE_L1_BITNR),
QCOW2_OL_ACTIVE_L2 = (1 << QCOW2_OL_ACTIVE_L2_BITNR),
QCOW2_OL_REFCOUNT_TABLE = (1 << QCOW2_OL_REFCOUNT_TABLE_BITNR),
QCOW2_OL_REFCOUNT_BLOCK = (1 << QCOW2_OL_REFCOUNT_BLOCK_BITNR),
QCOW2_OL_SNAPSHOT_TABLE = (1 << QCOW2_OL_SNAPSHOT_TABLE_BITNR),
QCOW2_OL_INACTIVE_L1 = (1 << QCOW2_OL_INACTIVE_L1_BITNR),
/* NOTE: Checking overlaps with inactive L2 tables will result in bdrv
* reads. */
QCOW2_OL_INACTIVE_L2 = (1 << QCOW2_OL_INACTIVE_L2_BITNR),
} QCow2MetadataOverlap;
/* Perform all overlap checks which can be done in constant time */
#define QCOW2_OL_CONSTANT \
(QCOW2_OL_MAIN_HEADER | QCOW2_OL_ACTIVE_L1 | QCOW2_OL_REFCOUNT_TABLE | \
QCOW2_OL_SNAPSHOT_TABLE)
/* Perform all overlap checks which don't require disk access */
#define QCOW2_OL_CACHED \
(QCOW2_OL_CONSTANT | QCOW2_OL_ACTIVE_L2 | QCOW2_OL_REFCOUNT_BLOCK | \
QCOW2_OL_INACTIVE_L1)
/* Perform all overlap checks */
#define QCOW2_OL_ALL \
(QCOW2_OL_CACHED | QCOW2_OL_INACTIVE_L2)
#define L1E_OFFSET_MASK 0x00fffffffffffe00ULL
#define L2E_OFFSET_MASK 0x00fffffffffffe00ULL
#define L1E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_OFFSET_MASK 0x00ffffffffffff00ULL
#define L2E_COMPRESSED_OFFSET_SIZE_MASK 0x3fffffffffffffffULL
#define REFT_OFFSET_MASK 0xfffffffffffffe00ULL
static inline int64_t start_of_cluster(BDRVQcowState *s, int64_t offset)
{
return offset & ~(s->cluster_size - 1);
}
static inline int64_t offset_into_cluster(BDRVQcowState *s, int64_t offset)
{
return offset & (s->cluster_size - 1);
}
#define REFT_OFFSET_MASK 0xffffffffffffff00ULL
static inline int size_to_clusters(BDRVQcowState *s, int64_t size)
{
return (size + (s->cluster_size - 1)) >> s->cluster_bits;
}
static inline int64_t size_to_l1(BDRVQcowState *s, int64_t size)
static inline int size_to_l1(BDRVQcowState *s, int64_t size)
{
int shift = s->cluster_bits + s->l2_bits;
return (size + (1ULL << shift) - 1) >> shift;
}
static inline int offset_to_l2_index(BDRVQcowState *s, int64_t offset)
{
return (offset >> s->cluster_bits) & (s->l2_size - 1);
}
static inline int64_t align_offset(int64_t offset, int n)
{
offset = (offset + n - 1) & ~(n - 1);
return offset;
}
static inline int64_t qcow2_vm_state_offset(BDRVQcowState *s)
{
return (int64_t)s->l1_vm_state_index << (s->cluster_bits + s->l2_bits);
}
static inline uint64_t qcow2_max_refcount_clusters(BDRVQcowState *s)
{
return QCOW_MAX_REFTABLE_SIZE >> s->cluster_bits;
}
static inline int qcow2_get_cluster_type(uint64_t l2_entry)
{
if (l2_entry & QCOW_OFLAG_COMPRESSED) {
@@ -446,44 +259,25 @@ static inline bool qcow2_need_accurate_refcounts(BDRVQcowState *s)
return !(s->incompatible_features & QCOW2_INCOMPAT_DIRTY);
}
static inline uint64_t l2meta_cow_start(QCowL2Meta *m)
{
return m->offset + m->cow_start.offset;
}
static inline uint64_t l2meta_cow_end(QCowL2Meta *m)
{
return m->offset + m->cow_end.offset
+ (m->cow_end.nb_sectors << BDRV_SECTOR_BITS);
}
// FIXME Need qcow2_ prefix to global functions
/* qcow2.c functions */
int qcow2_backing_read1(BlockDriverState *bs, QEMUIOVector *qiov,
int64_t sector_num, int nb_sectors);
int qcow2_mark_dirty(BlockDriverState *bs);
int qcow2_mark_corrupt(BlockDriverState *bs);
int qcow2_mark_consistent(BlockDriverState *bs);
int qcow2_update_header(BlockDriverState *bs);
/* qcow2-refcount.c functions */
int qcow2_refcount_init(BlockDriverState *bs);
void qcow2_refcount_close(BlockDriverState *bs);
int qcow2_update_cluster_refcount(BlockDriverState *bs, int64_t cluster_index,
int addend, enum qcow2_discard_type type);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, uint64_t size);
int64_t qcow2_alloc_clusters(BlockDriverState *bs, int64_t size);
int qcow2_alloc_clusters_at(BlockDriverState *bs, uint64_t offset,
int nb_clusters);
int64_t qcow2_alloc_bytes(BlockDriverState *bs, int size);
void qcow2_free_clusters(BlockDriverState *bs,
int64_t offset, int64_t size,
enum qcow2_discard_type type);
void qcow2_free_any_clusters(BlockDriverState *bs, uint64_t l2_entry,
int nb_clusters, enum qcow2_discard_type type);
int64_t offset, int64_t size);
void qcow2_free_any_clusters(BlockDriverState *bs,
uint64_t cluster_offset, int nb_clusters);
int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int64_t l1_table_offset, int l1_size, int addend);
@@ -491,17 +285,8 @@ int qcow2_update_snapshot_refcount(BlockDriverState *bs,
int qcow2_check_refcounts(BlockDriverState *bs, BdrvCheckResult *res,
BdrvCheckMode fix);
void qcow2_process_discards(BlockDriverState *bs, int ret);
int qcow2_check_metadata_overlap(BlockDriverState *bs, int ign, int64_t offset,
int64_t size);
int qcow2_pre_write_overlap_check(BlockDriverState *bs, int ign, int64_t offset,
int64_t size);
/* qcow2-cluster.c functions */
int qcow2_grow_l1_table(BlockDriverState *bs, uint64_t min_size,
bool exact_size);
int qcow2_write_l1_entry(BlockDriverState *bs, int l1_index);
int qcow2_grow_l1_table(BlockDriverState *bs, int min_size, bool exact_size);
void qcow2_l2_cache_reset(BlockDriverState *bs);
int qcow2_decompress_cluster(BlockDriverState *bs, uint64_t cluster_offset);
void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
@@ -512,30 +297,22 @@ void qcow2_encrypt_sectors(BDRVQcowState *s, int64_t sector_num,
int qcow2_get_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *cluster_offset);
int qcow2_alloc_cluster_offset(BlockDriverState *bs, uint64_t offset,
int *num, uint64_t *host_offset, QCowL2Meta **m);
int n_start, int n_end, int *num, QCowL2Meta *m);
uint64_t qcow2_alloc_compressed_cluster_offset(BlockDriverState *bs,
uint64_t offset,
int compressed_size);
int qcow2_alloc_cluster_link_l2(BlockDriverState *bs, QCowL2Meta *m);
int qcow2_discard_clusters(BlockDriverState *bs, uint64_t offset,
int nb_sectors, enum qcow2_discard_type type);
int nb_sectors);
int qcow2_zero_clusters(BlockDriverState *bs, uint64_t offset, int nb_sectors);
int qcow2_expand_zero_clusters(BlockDriverState *bs);
/* qcow2-snapshot.c functions */
int qcow2_snapshot_create(BlockDriverState *bs, QEMUSnapshotInfo *sn_info);
int qcow2_snapshot_goto(BlockDriverState *bs, const char *snapshot_id);
int qcow2_snapshot_delete(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp);
int qcow2_snapshot_delete(BlockDriverState *bs, const char *snapshot_id);
int qcow2_snapshot_list(BlockDriverState *bs, QEMUSnapshotInfo **psn_tab);
int qcow2_snapshot_load_tmp(BlockDriverState *bs,
const char *snapshot_id,
const char *name,
Error **errp);
int qcow2_snapshot_load_tmp(BlockDriverState *bs, const char *snapshot_name);
void qcow2_free_snapshots(BlockDriverState *bs);
int qcow2_read_snapshots(BlockDriverState *bs);
@@ -550,8 +327,6 @@ int qcow2_cache_set_dependency(BlockDriverState *bs, Qcow2Cache *c,
Qcow2Cache *dependency);
void qcow2_cache_depends_on_flush(Qcow2Cache *c);
int qcow2_cache_empty(BlockDriverState *bs, Qcow2Cache *c);
int qcow2_cache_get(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,
void **table);
int qcow2_cache_get_empty(BlockDriverState *bs, Qcow2Cache *c, uint64_t offset,

Some files were not shown because too many files have changed in this diff.