Compare commits

...

26 Commits

Author SHA1 Message Date
Fabiano Rosas
3955d1e016 tests/migration/guestperf: Add file, fixed-ram and direct-io support
Add support for the new migration features:
- 'file' transport;
- 'fixed-ram' stream format capability;
- 'direct-io' parameter;

Usage:
$ ./guestperf.py --binary <path/to/qemu> --initrd <path/to/initrd-stress.img> \
                 --transport file --dst-file migfile --multifd --fixed-ram \
                 --multifd-channels 4 --output fixed-ram.json --verbose

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
3793ccc532 migration: Add direct-io parameter
Add the direct-io migration parameter that tells the migration code to
use O_DIRECT when opening the migration stream file whenever possible.

This is currently only used for the secondary channels of fixed-ram
migration, which can guarantee that writes are page aligned.

However, the parameter could be made to affect other types of
file-based migrations in the future.
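
For illustration, assuming a fixed-ram file migration is otherwise
configured, the parameter can be toggled on HMP (parameter name as
registered in the HMP hunk further down; the exact command syntax is
an assumption):

(qemu) migrate_set_parameter direct-io on
(qemu) migrate file:migfile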

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
7dc0ef7355 tests/qtest: Add a multifd + fixed-ram migration test
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
b52fcde6cb migration/multifd: Support incoming fixed-ram stream format
For the incoming fixed-ram migration we need to read the ramblock
headers, get the pages bitmap and send the host address of each
non-zero page to the multifd channel thread for writing.

To read from the migration file we need a preadv function that can
read into the iovs in segments of contiguous pages because (as in the
writing case) the file offset applies to the entire iovec.

Usage on HMP is:

(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate_incoming file:migfile
(qemu) info status
(qemu) c

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
6ddd5a3bea migration/multifd: Support outgoing fixed-ram stream format
The new fixed-ram stream format uses a file transport and puts ram
pages in the migration file at their respective offsets. This can be
done in parallel by using the pwritev system call, which takes iovecs
and an offset.

Add support for enabling the new format along with multifd to make use
of the threading and page handling already in place.

This requires multifd to stop sending headers and leave the stream
format to the fixed-ram code. When it comes time to write the data, we
need to call a version of qio_channel_write that can take an offset.

Usage on HMP is:

(qemu) stop
(qemu) migrate_set_capability multifd on
(qemu) migrate_set_capability fixed-ram on
(qemu) migrate_set_parameter max-bandwidth 0
(qemu) migrate_set_parameter multifd-channels 8
(qemu) migrate file:migfile

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
0b9d3b3539 migration/ram: Add a wrapper for fixed-ram shadow bitmap
We'll need to set the shadow_bmap bits from outside ram.c soon and
TARGET_PAGE_BITS is poisoned, so add a wrapper for it.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
770c906061 io: Add a pwritev/preadv version that takes a discontiguous iovec
For the upcoming support for fixed-ram migration with multifd, we need
to be able to accept an iovec array with non-contiguous data.

Add a pwritev and preadv version that splits the array into contiguous
segments before writing. With that we can have the ram code continue
to add pages in any order and the multifd code continue to send large
arrays for reading and writing.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
Since iovs can be non-contiguous, we'd need a separate array on the
side to carry an extra file offset for each of them, so I'm relying on
the fact that the iovs are all within the same host page and passing
in an encoded offset that takes the host page into account.
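
A sketch of that encoding, using field names from the multifd hunk
below (block, iov and slice_idx stand in for the patch's local
variables; this is illustrative, not the literal code):

    /* all iovs point into the same ramblock's host memory */
    uint64_t encoded = block->pages_offset - (uint64_t)block->host;

    /* the real file offset of a slice is recovered by adding back
     * the host address of the slice's first element */
    uint64_t file_offset = encoded + (uint64_t)iov[slice_idx].iov_base;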
2023-03-30 14:30:46 -03:00
Fabiano Rosas
5bfa50b698 migration/multifd: Add pages to the receiving side
Currently multifd does not need to have knowledge of pages on the
receiving side because all the information needed is within the
packets that come in the stream.

We're about to add support to fixed-ram migration, which cannot use
packets because it expects the ramblock section in the migration file
to contain only the guest pages data.

Add a pointer to MultiFDPages in the multifd_recv_state and use the
pages similarly to what we already do on the sending side. The pages
are used to transfer data between the ram migration code in the main
migration thread and the multifd receiving threads.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
4777036420 migration/multifd: Add incoming QIOChannelFile support
On the receiving side we don't need to differentiate between main
channel and threads, so whichever channel is defined first gets to be
the main one. And since there are no packets, use the atomic channel
count to index into the params array.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
dadf694c4d migration/multifd: Add outgoing QIOChannelFile support
Allow multifd to open file-backed channels. This will be used when
enabling the fixed-ram migration stream format which expects a
seekable transport.

The QIOChannel read and write methods will use the preadv/pwritev
versions, which don't update the file offset at each call, so we can
reuse the fd without re-opening it for every channel.

Note that this is just setup code and multifd cannot yet make use of
the file channels.
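
As an illustration of the fd reuse (following the incoming file code
added later in this series; treat it as a sketch, not the literal
patch):

    QIOChannelFile *ioc = qio_channel_file_new_path(fname, O_RDONLY, 0, errp);
    int fd = ioc->fd;

    /* extra channels wrap the same fd; they only issue preadv/pwritev,
     * so the shared file position never moves and no re-open is needed */
    QIOChannelFile *extra = qio_channel_file_new_fd(fd);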

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
024ea0f982 migration/multifd: Allow multifd without packets
For the upcoming support for the new 'fixed-ram' migration stream
format, we cannot use multifd packets because each write into the
ramblock section in the migration file is expected to contain only the
guest pages. They are written at their respective offsets relative to
the ramblock section header.

There is no space for the packet information and the expected gains
from the new approach come partly from being able to write the pages
sequentially without extraneous data in between.

The new format also doesn't need the packets; all necessary
information can be taken from the standard migration headers, with
some (future) changes to the multifd code.

Use the presence of the fixed-ram capability to decide whether to send
packets. For now this has no effect as fixed-ram cannot yet be enabled
with multifd.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
b97556e072 migration/multifd: Remove direct "socket" references
We're about to enable support for other transports in multifd, so
remove direct references to sockets.

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Fabiano Rosas
a25b0126f0 migration: Add completion tracepoint
Add a completion tracepoint that provides basic stats for
debug. Displays throughput (MB/s and pages/s) and total time (ms).

Usage:
  $QEMU ... -trace migration_status

Output:
  migration_status 1506 MB/s, 436725 pages/s, 8698 ms

Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Nikolay Borisov
728342fd53 tests/qtest: migration-test: Add tests for fixed-ram file-based migration
Add basic tests for 'fixed-ram' migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:46 -03:00
Nikolay Borisov
0e011a3b6a migration: Add support for 'fixed-ram' migration restore
Add the necessary code to parse the format changes for the 'fixed-ram'
capability.

One of the more notable changes in behavior is that in the 'fixed-ram'
case ram pages are restored in one go rather than constantly looping
through the migration stream.

Also, due to idiosyncrasies of the format, I have added the
'ram_migrated' flag, since it was easier to simply return directly
from ->load_state rather than introduce more conditionals around the
code to prevent ->load_state from being called multiple times (from
qemu_loadvm_section_start_full/qemu_loadvm_section_part_end, i.e. from
multiple QEMU_VM_SECTION_(PART|END) flags).

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:30:44 -03:00
Nikolay Borisov
6295f0cc3e migration: Refactor precopy ram loading code
To facilitate the implementation of the 'fixed-ram' migration restore,
factor out the code responsible for parsing the ramblock
headers. This also makes ram_load_precopy easier to comprehend.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:29:26 -03:00
Nikolay Borisov
a416893903 migration/ram: Introduce 'fixed-ram' migration stream capability
Implement the 'fixed-ram' feature. The core of the feature is to
ensure that each ram page has a specific offset in the resulting
migration stream. The reasons why we'd want such behavior are twofold:

 - When doing a 'fixed-ram' migration the resulting file will have a
   bounded size, since pages which are dirtied multiple times will
   always go to a fixed location in the file, rather than constantly
   being added to a sequential stream. This eliminates cases where a vm
   with, say, 1G of ram can result in a migration file that's 10s of
   GBs, provided that the workload constantly redirties memory.

 - It paves the way to implement DIO-enabled save/restore of the
   migration stream as the pages are ensured to be written at aligned
   offsets.

The feature requires changing the stream format. First, a bitmap is
introduced which tracks which pages have been written (i.e. are
dirty) during migration; it is subsequently written to the resulting
file, again at a fixed location for every ramblock. Zero pages are
ignored as they'd be zero in the destination as well. With the changed
format, the data looks like the following:

|name len|name|used_len|pc*|bitmap_size|pages_offset|bitmap|pages|

* pc - refers to the page_size/mr->addr members, so newly added members
begin from "bitmap_size".

This layout is initialized during ram_save_setup, so instead of having
a sequential stream of pages that follow the ramblock headers, the
dirty pages for a ramblock follow its header. Since all pages have a
fixed location, RAM_SAVE_FLAG_EOS is no longer generated on every
migration iteration; there is effectively a single RAM_SAVE_FLAG_EOS
right at the end.
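
Put differently, a page's position in the file follows directly from
its ramblock header; as a sketch (pages_offset is the header field
described above, offset_in_block is a hypothetical name for the page's
offset within its ramblock):

    /* fixed file position of a guest page, independent of dirty order */
    uint64_t file_pos = block->pages_offset + offset_in_block;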

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:29:24 -03:00
Nikolay Borisov
fb4c0ef7f2 migration/qemu-file: add utility methods for working with seekable channels
Add utility methods that will be needed when implementing 'fixed-ram'
migration capability.

qemu_file_is_seekable
qemu_put_buffer_at
qemu_get_buffer_at
qemu_set_offset
qemu_get_offset
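
A sketch of how the helpers combine (prototypes as added by this
patch; page, page_size and offset_in_block are hypothetical names):

    if (qemu_file_is_seekable(f)) {
        /* write the page at its fixed location without disturbing the
         * sequential stream position used by the rest of the code */
        qemu_put_buffer_at(f, page, page_size,
                           block->pages_offset + offset_in_block);
    }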

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
fixed total_transferred accounting

restructured to use qio_channel_file_preadv instead of the _full
variant
2023-03-30 14:28:02 -03:00
Nikolay Borisov
5cd4d9439a io: implement io_pwritev/preadv for QIOChannelFile
The upcoming 'fixed-ram' feature will require qemu to write data to
(and restore from) specific offsets of the migration file.

Add a minimal implementation of pwritev/preadv and expose them via the
io_pwritev and io_preadv interfaces.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-03-30 14:25:46 -03:00
Nikolay Borisov
069e252249 io: Add generic pwritev/preadv interface
Introduce basic pwritev/preadv support in the generic channel layer.
Specific implementation will follow for the file channel as this is
required in order to support migration streams with fixed location of
each ram page.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:25:46 -03:00
Nikolay Borisov
b7be388a39 io: add and implement QIO_CHANNEL_FEATURE_SEEKABLE for channel file
Add a generic QIOChannel feature SEEKABLE which will be used by the
qemu_file* APIs. For the time being this will only be implemented for
file channels.
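
Callers are expected to probe the feature before issuing positioned
I/O; a minimal sketch using the interfaces introduced in this series:

    if (qio_channel_has_feature(ioc, QIO_CHANNEL_FEATURE_SEEKABLE)) {
        /* safe to use the positioned variants */
        qio_channel_pwritev_full(ioc, iov, niov, offset, errp);
    }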

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-03-30 14:25:46 -03:00
Nikolay Borisov
707be34e9e migration: Initial support of fixed-ram feature for analyze-migration.py
In order to allow the analyze-migration.py script to work with
migration streams that have the 'fixed-ram' capability, it needs
access to the stream's configuration object. This commit enables this
by making the migration JSON writer part of the MigrationState struct,
allowing the configuration object to be serialized to JSON.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:25:46 -03:00
Nikolay Borisov
e5e85ac9de tests/qtest: migration-test: Add tests for file-based migration
Add basic tests for file-based migration.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
(farosas) fix segfault when connect_uri is not set
2023-03-30 14:25:46 -03:00
Nikolay Borisov
0608a4483e tests/qtest: migration: Add migrate_incoming_qmp helper
File-based migration requires the target to initiate its migration
after the source has finished writing out the data to the file.
Currently there's no easy way to initiate 'migrate-incoming'; allow
this by introducing a migrate_incoming_qmp helper, similar to
migrate_qmp.

Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
2023-03-30 14:25:46 -03:00
Nikolay Borisov
cc582ebb99 migration: Add support for 'file:' uri for incoming migration
This is a counterpart to the 'file:' URI support for source migration;
now a file can also serve as the source of an incoming migration.

Unlike other migration protocol backends, the 'file' protocol cannot
honour non-blocking mode. POSIX file/block storage will always report
ready to read/write, regardless of how slow the underlying storage
will be at servicing the request.

For incoming migration this limitation may result in the main event
loop not being fully responsive while loading the VM state. This
won't impact the VM since it is not running at this phase, however,
it may impact management applications.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:25:46 -03:00
Nikolay Borisov
790cc25f52 migration: Add support for 'file:' uri for source migration
Implement support for a "file:" URI so that a migration can be
initiated directly to a file from QEMU.

Unlike other migration protocol backends, the 'file' protocol cannot
honour non-blocking mode. POSIX file/block storage will always report
ready to read/write, regardless of how slow the underlying storage
will be at servicing the request.

For outgoing migration this limitation is not a serious problem as
the migration data transfer always happens in a dedicated thread.
It may, however, result in delays in honouring a request to cancel
the migration operation.

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Fabiano Rosas <farosas@suse.de>
2023-03-30 14:25:44 -03:00
31 changed files with 1547 additions and 154 deletions

View File

@@ -39,6 +39,8 @@ over any transport.
- exec migration: do the migration using the stdin/stdout through a process.
- fd migration: do the migration using a file descriptor that is
passed to QEMU. QEMU doesn't care how this file descriptor is opened.
- file migration: do the migration using a file that is passed by name
to QEMU.
In addition, support is included for migration using RDMA, which
transports the page data using ``RDMA``, where the hardware takes care of
@@ -566,6 +568,42 @@ Others (especially either older devices or system devices which for
some reason don't have a bus concept) make use of the ``instance id``
for otherwise identically named devices.
Fixed-ram format
----------------
When the ``fixed-ram`` capability is enabled, a slightly different
stream format is used for the RAM section. Instead of having a
sequential stream of pages that follow the RAMBlock headers, the dirty
pages for a RAMBlock follow its header. This ensures that each RAM
page has a fixed offset in the resulting migration stream.
- RAMBlock 1
- ID string length
- ID string
- Used size
- Shadow bitmap size
- Pages offset in migration stream*
- Shadow bitmap
- Sequence of pages for RAMBlock 1 (* offset points here)
- RAMBlock 2
- ID string length
- ID string
- Used size
- Shadow bitmap size
- Pages offset in migration stream*
- Shadow bitmap
- Sequence of pages for RAMBlock 2 (* offset points here)
The ``fixed-ram`` capability can be enabled in both source and
destination with:
``migrate_set_capability fixed-ram on``
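A typical save and restore sequence on HMP, mirroring the examples in
this series' commit messages, looks like:

  (qemu) stop
  (qemu) migrate_set_capability fixed-ram on
  (qemu) migrate file:migfile

and on the destination:

  (qemu) migrate_set_capability fixed-ram on
  (qemu) migrate_incoming file:migfile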
Return path
-----------

View File

@@ -43,6 +43,14 @@ struct RAMBlock {
size_t page_size;
/* dirty bitmap used during migration */
unsigned long *bmap;
/* shadow dirty bitmap used when migrating to a file */
unsigned long *shadow_bmap;
/*
* offset in the file where pages belonging to this ramblock are saved,
* used only during migration to a file.
*/
off_t bitmap_offset;
uint64_t pages_offset;
/* bitmap of already received pages in postcopy */
unsigned long *receivedmap;

View File

@@ -22,6 +22,7 @@
#define QIO_CHANNEL_FILE_H
#include "io/channel.h"
#include "io/task.h"
#include "qom/object.h"
#define TYPE_QIO_CHANNEL_FILE "qio-channel-file"

View File

@@ -33,8 +33,10 @@ OBJECT_DECLARE_TYPE(QIOChannel, QIOChannelClass,
#define QIO_CHANNEL_ERR_BLOCK -2
#define QIO_CHANNEL_WRITE_FLAG_ZERO_COPY 0x1
#define QIO_CHANNEL_WRITE_FLAG_WITH_OFFSET 0x2
#define QIO_CHANNEL_READ_FLAG_MSG_PEEK 0x1
#define QIO_CHANNEL_READ_FLAG_WITH_OFFSET 0x2
typedef enum QIOChannelFeature QIOChannelFeature;
@@ -44,6 +46,7 @@ enum QIOChannelFeature {
QIO_CHANNEL_FEATURE_LISTEN,
QIO_CHANNEL_FEATURE_WRITE_ZERO_COPY,
QIO_CHANNEL_FEATURE_READ_MSG_PEEK,
QIO_CHANNEL_FEATURE_SEEKABLE,
};
@@ -128,6 +131,16 @@ struct QIOChannelClass {
Error **errp);
/* Optional callbacks */
ssize_t (*io_pwritev)(QIOChannel *ioc,
const struct iovec *iov,
size_t niov,
off_t offset,
Error **errp);
ssize_t (*io_preadv)(QIOChannel *ioc,
const struct iovec *iov,
size_t niov,
off_t offset,
Error **errp);
int (*io_shutdown)(QIOChannel *ioc,
QIOChannelShutdown how,
Error **errp);
@@ -510,6 +523,126 @@ int qio_channel_set_blocking(QIOChannel *ioc,
int qio_channel_close(QIOChannel *ioc,
Error **errp);
/**
* qio_channel_pwritev_full
* @ioc: the channel object
* @iov: the array of memory regions to write data from
* @niov: the length of the @iov array
* @offset: offset in the channel where writes should begin
* @errp: pointer to a NULL-initialized error object
*
* Not all implementations will support this facility, so may report
* an error. To avoid errors, the caller may check for the feature
* flag QIO_CHANNEL_FEATURE_SEEKABLE prior to calling this method.
*
* Behaves as qio_channel_writev_full, apart from not supporting
* sending of file handles as well as beginning the write at the
* passed @offset
*
*/
ssize_t qio_channel_pwritev_full(QIOChannel *ioc, const struct iovec *iov,
size_t niov, off_t offset, Error **errp);
/**
* qio_channel_write_full_all:
* @ioc: the channel object
* @iov: the array of memory regions to write data from
* @niov: the length of the @iov array
* @offset: the iovec offset in the file where to write the data
* @fds: an array of file handles to send
* @nfds: number of file handles in @fds
* @flags: write flags (QIO_CHANNEL_WRITE_FLAG_*)
* @errp: pointer to a NULL-initialized error object
*
*
* Selects between a writev or pwritev channel writer function.
*
* If QIO_CHANNEL_WRITE_FLAG_WITH_OFFSET is passed in flags, pwritev is
* used and @offset is expected to be a meaningful value, @fds and
* @nfds are ignored; otherwise uses writev and @offset is ignored.
*
* Returns: 0 if all bytes were written, or -1 on error
*/
int qio_channel_write_full_all(QIOChannel *ioc, const struct iovec *iov,
size_t niov, off_t offset, int *fds, size_t nfds,
int flags, Error **errp);
/**
* qio_channel_pwritev
* @ioc: the channel object
* @buf: the memory region to write data from
* @buflen: the number of bytes in @buf
* @offset: offset in the channel where writes should begin
* @errp: pointer to a NULL-initialized error object
*
* Not all implementations will support this facility, so may report
* an error. To avoid errors, the caller may check for the feature
* flag QIO_CHANNEL_FEATURE_SEEKABLE prior to calling this method.
*
*/
ssize_t qio_channel_pwritev(QIOChannel *ioc, char *buf, size_t buflen,
off_t offset, Error **errp);
/**
* qio_channel_preadv_full
* @ioc: the channel object
* @iov: the array of memory regions to read data into
* @niov: the length of the @iov array
* @offset: offset in the channel where reads should begin
* @errp: pointer to a NULL-initialized error object
*
* Not all implementations will support this facility, so may report
* an error. To avoid errors, the caller may check for the feature
* flag QIO_CHANNEL_FEATURE_SEEKABLE prior to calling this method.
*
* Behaves as qio_channel_readv_full, apart from not supporting
* receiving of file handles as well as beginning the read at the
* passed @offset
*
*/
ssize_t qio_channel_preadv_full(QIOChannel *ioc, const struct iovec *iov,
size_t niov, off_t offset, Error **errp);
/**
* qio_channel_read_full_all:
* @ioc: the channel object
* @iov: the array of memory regions to read data to
* @niov: the length of the @iov array
* @offset: the iovec offset in the file from where to read the data
* @flags: read flags (QIO_CHANNEL_READ_FLAG_*)
* @errp: pointer to a NULL-initialized error object
*
*
* Selects between a readv or preadv channel reader function.
*
* If QIO_CHANNEL_READ_FLAG_WITH_OFFSET is passed in flags, preadv is
* used and @offset is expected to be a meaningful value; otherwise
* readv is used and @offset is ignored.
*
* Returns: 0 if all bytes were read, or -1 on error
*/
int qio_channel_read_full_all(QIOChannel *ioc, const struct iovec *iov,
size_t niov, off_t offset,
int flags, Error **errp);
/**
* qio_channel_preadv
* @ioc: the channel object
* @buf: the memory region to read data into
* @buflen: the number of bytes in @buf
* @offset: offset in the channel where reads should begin
* @errp: pointer to a NULL-initialized error object
*
* Not all implementations will support this facility, so may report
* an error. To avoid errors, the caller may check for the feature
* flag QIO_CHANNEL_FEATURE_SEEKABLE prior to calling this method.
*
*/
ssize_t qio_channel_preadv(QIOChannel *ioc, char *buf, size_t buflen,
off_t offset, Error **errp);
/**
* qio_channel_shutdown:
* @ioc: the channel object

View File

@@ -50,6 +50,8 @@ unsigned int qemu_get_be16(QEMUFile *f);
unsigned int qemu_get_be32(QEMUFile *f);
uint64_t qemu_get_be64(QEMUFile *f);
bool qemu_file_is_seekable(QEMUFile *f);
static inline void qemu_put_be64s(QEMUFile *f, const uint64_t *pv)
{
qemu_put_be64(f, *pv);

View File

@@ -570,6 +570,8 @@ int qemu_lock_fd_test(int fd, int64_t start, int64_t len, bool exclusive);
bool qemu_has_ofd_lock(void);
#endif
bool qemu_has_direct_io(void);
#if defined(__HAIKU__) && defined(__i386__)
#define FMT_pid "%ld"
#elif defined(WIN64)

View File

@@ -35,6 +35,10 @@ qio_channel_file_new_fd(int fd)
ioc->fd = fd;
if (lseek(fd, 0, SEEK_CUR) != (off_t)-1) {
qio_channel_set_feature(QIO_CHANNEL(ioc), QIO_CHANNEL_FEATURE_SEEKABLE);
}
trace_qio_channel_file_new_fd(ioc, fd);
return ioc;
@@ -59,6 +63,10 @@ qio_channel_file_new_path(const char *path,
return NULL;
}
if (lseek(ioc->fd, 0, SEEK_CUR) != (off_t)-1) {
qio_channel_set_feature(QIO_CHANNEL(ioc), QIO_CHANNEL_FEATURE_SEEKABLE);
}
trace_qio_channel_file_new_path(ioc, path, flags, mode, ioc->fd);
return ioc;
@@ -137,6 +145,56 @@ static ssize_t qio_channel_file_writev(QIOChannel *ioc,
return ret;
}
static ssize_t qio_channel_file_preadv(QIOChannel *ioc,
const struct iovec *iov,
size_t niov,
off_t offset,
Error **errp)
{
QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
ssize_t ret;
retry:
ret = preadv(fioc->fd, iov, niov, offset);
if (ret < 0) {
if (errno == EAGAIN) {
return QIO_CHANNEL_ERR_BLOCK;
}
if (errno == EINTR) {
goto retry;
}
error_setg_errno(errp, errno, "Unable to read from file");
return -1;
}
return ret;
}
static ssize_t qio_channel_file_pwritev(QIOChannel *ioc,
const struct iovec *iov,
size_t niov,
off_t offset,
Error **errp)
{
QIOChannelFile *fioc = QIO_CHANNEL_FILE(ioc);
ssize_t ret;
retry:
ret = pwritev(fioc->fd, iov, niov, offset);
if (ret <= 0) {
if (errno == EAGAIN) {
return QIO_CHANNEL_ERR_BLOCK;
}
if (errno == EINTR) {
goto retry;
}
error_setg_errno(errp, errno, "Unable to write to file");
return -1;
}
return ret;
}
static int qio_channel_file_set_blocking(QIOChannel *ioc,
bool enabled,
Error **errp)
@@ -219,6 +277,8 @@ static void qio_channel_file_class_init(ObjectClass *klass,
ioc_klass->io_writev = qio_channel_file_writev;
ioc_klass->io_readv = qio_channel_file_readv;
ioc_klass->io_set_blocking = qio_channel_file_set_blocking;
ioc_klass->io_pwritev = qio_channel_file_pwritev;
ioc_klass->io_preadv = qio_channel_file_preadv;
ioc_klass->io_seek = qio_channel_file_seek;
ioc_klass->io_close = qio_channel_file_close;
ioc_klass->io_create_watch = qio_channel_file_create_watch;

View File

@@ -445,6 +445,146 @@ GSource *qio_channel_add_watch_source(QIOChannel *ioc,
}
ssize_t qio_channel_pwritev_full(QIOChannel *ioc, const struct iovec *iov,
size_t niov, off_t offset, Error **errp)
{
QIOChannelClass *klass = QIO_CHANNEL_GET_CLASS(ioc);
if (!klass->io_pwritev) {
error_setg(errp, "Channel does not support pwritev");
return -1;
}
if (!qio_channel_has_feature(ioc, QIO_CHANNEL_FEATURE_SEEKABLE)) {
error_setg_errno(errp, EINVAL, "Requested channel is not seekable");
return -1;
}
return klass->io_pwritev(ioc, iov, niov, offset, errp);
}
static int qio_channel_preadv_pwritev_contiguous(QIOChannel *ioc,
const struct iovec *iov,
size_t niov, off_t offset,
bool is_write, Error **errp)
{
ssize_t ret;
int i, slice_idx, slice_num;
uint64_t base, next, file_offset;
size_t len;
slice_idx = 0;
slice_num = 1;
/*
* If the iov array doesn't have contiguous elements, we need to
* split it in slices because we only have one (file) 'offset' for
* the whole iov. Do this here so callers don't need to break the
* iov array themselves.
*/
for (i = 0; i < niov; i++, slice_num++) {
base = (uint64_t) iov[i].iov_base;
if (i != niov - 1) {
len = iov[i].iov_len;
next = (uint64_t) iov[i + 1].iov_base;
if (base + len == next) {
continue;
}
}
/*
* Use the offset of the first element of the segment that
* we're sending.
*/
file_offset = offset + (uint64_t) iov[slice_idx].iov_base;
if (is_write) {
ret = qio_channel_pwritev_full(ioc, &iov[slice_idx], slice_num,
file_offset, errp);
} else {
ret = qio_channel_preadv_full(ioc, &iov[slice_idx], slice_num,
file_offset, errp);
}
if (ret < 0) {
break;
}
slice_idx += slice_num;
slice_num = 0;
}
return (ret < 0) ? -1 : 0;
}
int qio_channel_write_full_all(QIOChannel *ioc,
const struct iovec *iov,
size_t niov, off_t offset,
int *fds, size_t nfds,
int flags, Error **errp)
{
if (flags & QIO_CHANNEL_WRITE_FLAG_WITH_OFFSET) {
return qio_channel_preadv_pwritev_contiguous(ioc, iov, niov,
offset, true, errp);
}
return qio_channel_writev_full_all(ioc, iov, niov, NULL, 0, flags, errp);
}
ssize_t qio_channel_pwritev(QIOChannel *ioc, char *buf, size_t buflen,
off_t offset, Error **errp)
{
struct iovec iov = {
.iov_base = buf,
.iov_len = buflen
};
return qio_channel_pwritev_full(ioc, &iov, 1, offset, errp);
}
ssize_t qio_channel_preadv_full(QIOChannel *ioc, const struct iovec *iov,
size_t niov, off_t offset, Error **errp)
{
QIOChannelClass *klass = QIO_CHANNEL_GET_CLASS(ioc);
if (!klass->io_preadv) {
error_setg(errp, "Channel does not support preadv");
return -1;
}
if (!qio_channel_has_feature(ioc, QIO_CHANNEL_FEATURE_SEEKABLE)) {
error_setg_errno(errp, EINVAL, "Requested channel is not seekable");
return -1;
}
return klass->io_preadv(ioc, iov, niov, offset, errp);
}
int qio_channel_read_full_all(QIOChannel *ioc, const struct iovec *iov,
size_t niov, off_t offset,
int flags, Error **errp)
{
if (flags & QIO_CHANNEL_READ_FLAG_WITH_OFFSET) {
return qio_channel_preadv_pwritev_contiguous(ioc, iov, niov,
offset, false, errp);
}
return qio_channel_readv_full_all(ioc, iov, niov, NULL, NULL, errp);
}
ssize_t qio_channel_preadv(QIOChannel *ioc, char *buf, size_t buflen,
off_t offset, Error **errp)
{
struct iovec iov = {
.iov_base = buf,
.iov_len = buflen
};
return qio_channel_preadv_full(ioc, &iov, 1, offset, errp);
}
int qio_channel_shutdown(QIOChannel *ioc,
QIOChannelShutdown how,
Error **errp)

migration/file.c (new file, 130 lines)
View File

@@ -0,0 +1,130 @@
#include "qemu/osdep.h"
#include "io/channel-file.h"
#include "file.h"
#include "qemu/error-report.h"
#include "migration.h"
static struct FileOutgoingArgs {
char *fname;
int flags;
int mode;
} outgoing_args;
static void qio_channel_file_connect_worker(QIOTask *task, gpointer opaque)
{
/* noop */
}
static void file_migration_cancel(Error *errp)
{
MigrationState *s;
s = migrate_get_current();
migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
MIGRATION_STATUS_FAILED);
migration_cancel(errp);
}
int file_send_channel_destroy(QIOChannel *ioc)
{
if (ioc) {
qio_channel_close(ioc, NULL);
object_unref(OBJECT(ioc));
}
g_free(outgoing_args.fname);
outgoing_args.fname = NULL;
return 0;
}
void file_send_channel_create(QIOTaskFunc f, void *data)
{
QIOChannelFile *ioc;
QIOTask *task;
Error *errp = NULL;
int flags = outgoing_args.flags;
if (migrate_use_direct_io() && qemu_has_direct_io()) {
/*
* Enable O_DIRECT for the secondary channels. These are used
* for sending ram pages and writes should be guaranteed to be
* aligned to at least page size.
*/
flags |= O_DIRECT;
}
ioc = qio_channel_file_new_path(outgoing_args.fname, flags,
outgoing_args.mode, &errp);
if (!ioc) {
file_migration_cancel(errp);
return;
}
task = qio_task_new(OBJECT(ioc), f, (gpointer)data, NULL);
qio_task_run_in_thread(task, qio_channel_file_connect_worker,
(gpointer)data, NULL, NULL);
}
void file_start_outgoing_migration(MigrationState *s, const char *fname, Error **errp)
{
QIOChannelFile *ioc;
int flags = O_CREAT | O_TRUNC | O_WRONLY;
mode_t mode = 0660;
ioc = qio_channel_file_new_path(fname, flags, mode, errp);
if (!ioc) {
error_report("Error creating migration outgoing channel");
return;
}
outgoing_args.fname = g_strdup(fname);
outgoing_args.flags = flags;
outgoing_args.mode = mode;
qio_channel_set_name(QIO_CHANNEL(ioc), "migration-file-outgoing");
migration_channel_connect(s, QIO_CHANNEL(ioc), NULL, NULL);
object_unref(OBJECT(ioc));
}
static void file_process_migration_incoming(QIOTask *task, gpointer opaque)
{
QIOChannelFile *ioc = opaque;
migration_channel_process_incoming(QIO_CHANNEL(ioc));
object_unref(OBJECT(ioc));
}
void file_start_incoming_migration(const char *fname, Error **errp)
{
QIOChannelFile *ioc;
QIOTask *task;
int channels = 1;
int i = 0, fd;
ioc = qio_channel_file_new_path(fname, O_RDONLY, 0, errp);
if (!ioc) {
goto out;
}
if (migrate_use_multifd()) {
channels += migrate_multifd_channels();
}
fd = ioc->fd;
do {
qio_channel_set_name(QIO_CHANNEL(ioc), "migration-file-incoming");
task = qio_task_new(OBJECT(ioc), file_process_migration_incoming,
(gpointer)ioc, NULL);
qio_task_run_in_thread(task, qio_channel_file_connect_worker,
(gpointer)ioc, NULL, NULL);
} while (++i < channels && (ioc = qio_channel_file_new_fd(fd)));
out:
if (!ioc) {
error_report("Error creating migration incoming channel");
return;
}
}

migration/file.h (new file, 14 lines)
View File

@@ -0,0 +1,14 @@
#ifndef QEMU_MIGRATION_FILE_H
#define QEMU_MIGRATION_FILE_H
#include "io/task.h"
#include "channel.h"
void file_start_outgoing_migration(MigrationState *s,
const char *filename,
Error **errp);
void file_start_incoming_migration(const char *fname, Error **errp);
void file_send_channel_create(QIOTaskFunc f, void *data);
int file_send_channel_destroy(QIOChannel *ioc);
#endif

View File

@@ -17,6 +17,7 @@ softmmu_ss.add(files(
'colo.c',
'exec.c',
'fd.c',
'file.c',
'global_state.c',
'migration-hmp-cmds.c',
'migration.c',

View File

@@ -344,6 +344,11 @@ void hmp_info_migrate_parameters(Monitor *mon, const QDict *qdict)
}
}
}
if (params->has_direct_io) {
monitor_printf(mon, "%s: %s\n",
MigrationParameter_str(MIGRATION_PARAMETER_DIRECT_IO),
params->direct_io ? "on" : "off");
}
}
qapi_free_MigrationParameters(params);
@@ -600,6 +605,10 @@ void hmp_migrate_set_parameter(Monitor *mon, const QDict *qdict)
error_setg(&err, "The block-bitmap-mapping parameter can only be set "
"through QMP");
break;
case MIGRATION_PARAMETER_DIRECT_IO:
p->has_direct_io = true;
visit_type_bool(v, param, &p->direct_io, &err);
break;
default:
assert(0);
}

View File

@@ -20,6 +20,7 @@
#include "migration/blocker.h"
#include "exec.h"
#include "fd.h"
#include "file.h"
#include "socket.h"
#include "sysemu/runstate.h"
#include "sysemu/sysemu.h"
@@ -167,7 +168,8 @@ INITIALIZE_MIGRATE_CAPS_SET(check_caps_background_snapshot,
MIGRATION_CAPABILITY_XBZRLE,
MIGRATION_CAPABILITY_X_COLO,
MIGRATION_CAPABILITY_VALIDATE_UUID,
MIGRATION_CAPABILITY_ZERO_COPY_SEND);
MIGRATION_CAPABILITY_ZERO_COPY_SEND,
MIGRATION_CAPABILITY_FIXED_RAM);
/* When we add fault tolerance, we could have several
migrations at once. For now we don't need to add
@@ -192,7 +194,7 @@ static bool migration_needs_multiple_sockets(void)
static bool uri_supports_multi_channels(const char *uri)
{
return strstart(uri, "tcp:", NULL) || strstart(uri, "unix:", NULL) ||
strstart(uri, "vsock:", NULL);
strstart(uri, "vsock:", NULL) || strstart(uri, "file:", NULL);
}
static bool
@@ -526,6 +528,8 @@ static void qemu_start_incoming_migration(const char *uri, Error **errp)
exec_start_incoming_migration(p, errp);
} else if (strstart(uri, "fd:", &p)) {
fd_start_incoming_migration(p, errp);
} else if (strstart(uri, "file:", &p)) {
file_start_incoming_migration(p, errp);
} else {
error_setg(errp, "unknown migration protocol: %s", uri);
}
@@ -790,6 +794,8 @@ void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp)
}
default_channel = (channel_magic == cpu_to_be32(QEMU_VM_FILE_MAGIC));
} else if (migrate_use_multifd() && migrate_fixed_ram()) {
default_channel = multifd_recv_first_channel();
} else {
default_channel = !mis->from_src_file;
}
@@ -1016,6 +1022,11 @@ MigrationParameters *qmp_query_migrate_parameters(Error **errp)
s->parameters.block_bitmap_mapping);
}
if (s->parameters.has_direct_io) {
params->has_direct_io = true;
params->direct_io = s->parameters.direct_io;
}
return params;
}
@@ -1338,6 +1349,23 @@ static bool migrate_caps_check(bool *cap_list,
}
#endif
if (cap_list[MIGRATION_CAPABILITY_FIXED_RAM]) {
if (cap_list[MIGRATION_CAPABILITY_XBZRLE]) {
error_setg(errp, "Directly mapped memory incompatible with xbzrle");
return false;
}
if (cap_list[MIGRATION_CAPABILITY_COMPRESS]) {
error_setg(errp, "Directly mapped memory incompatible with compression");
return false;
}
if (cap_list[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
error_setg(errp, "Directly mapped memory incompatible with postcopy ram");
return false;
}
}
if (cap_list[MIGRATION_CAPABILITY_POSTCOPY_RAM]) {
/* This check is reasonably expensive, so only when it's being
* set the first time, also it's only the destination that needs
@@ -1759,6 +1787,10 @@ static void migrate_params_test_apply(MigrateSetParameters *params,
dest->has_block_bitmap_mapping = true;
dest->block_bitmap_mapping = params->block_bitmap_mapping;
}
if (params->has_direct_io) {
dest->direct_io = params->direct_io;
}
}
static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
@@ -1881,6 +1913,10 @@ static void migrate_params_apply(MigrateSetParameters *params, Error **errp)
QAPI_CLONE(BitmapMigrationNodeAliasList,
params->block_bitmap_mapping);
}
if (params->has_direct_io) {
s->parameters.direct_io = params->direct_io;
}
}
void qmp_migrate_set_parameters(MigrateSetParameters *params, Error **errp)
@@ -2257,6 +2293,7 @@ void migrate_init(MigrationState *s)
error_free(s->error);
s->error = NULL;
s->hostname = NULL;
s->vmdesc = NULL;
migrate_set_state(&s->state, MIGRATION_STATUS_NONE, MIGRATION_STATUS_SETUP);
@@ -2523,6 +2560,8 @@ void qmp_migrate(const char *uri, bool has_blk, bool blk,
exec_start_outgoing_migration(s, p, &local_err);
} else if (strstart(uri, "fd:", &p)) {
fd_start_outgoing_migration(s, p, &local_err);
} else if (strstart(uri, "file:", &p)) {
file_start_outgoing_migration(s, p, &local_err);
} else {
if (!(has_resume && resume)) {
yank_unregister_instance(MIGRATION_YANK_INSTANCE);
@@ -2711,6 +2750,15 @@ bool migrate_pause_before_switchover(void)
MIGRATION_CAPABILITY_PAUSE_BEFORE_SWITCHOVER];
}
bool migrate_to_file(void)
{
MigrationState *s;
s = migrate_get_current();
return qemu_file_is_seekable(s->to_dst_file);
}
int migrate_multifd_channels(void)
{
MigrationState *s;
@@ -2730,6 +2778,16 @@ MultiFDCompression migrate_multifd_compression(void)
return s->parameters.multifd_compression;
}
int migrate_fixed_ram(void)
{
return migrate_get_current()->enabled_capabilities[MIGRATION_CAPABILITY_FIXED_RAM];
}
bool migrate_multifd_use_packets(void)
{
return !migrate_fixed_ram();
}
int migrate_multifd_zlib_level(void)
{
MigrationState *s;
@@ -2840,6 +2898,24 @@ bool migrate_postcopy_preempt(void)
return s->enabled_capabilities[MIGRATION_CAPABILITY_POSTCOPY_PREEMPT];
}
bool migrate_use_direct_io(void)
{
MigrationState *s;
/* For now O_DIRECT is only supported with fixed-ram */
if (!migrate_fixed_ram()) {
return false;
}
s = migrate_get_current();
if (s->parameters.has_direct_io) {
return s->parameters.direct_io;
}
return false;
}
/* migration thread support */
/*
* Something bad happened to the RP stream, mark an error
@@ -3777,7 +3853,7 @@ static uint64_t migration_total_bytes(MigrationState *s)
ram_counters.multifd_bytes;
}
static void migration_calculate_complete(MigrationState *s)
void migration_calculate_complete(MigrationState *s)
{
uint64_t bytes = migration_total_bytes(s);
int64_t end_time = qemu_clock_get_ms(QEMU_CLOCK_REALTIME);
@@ -3809,8 +3885,7 @@ static void update_iteration_initial_status(MigrationState *s)
s->iteration_initial_pages = ram_get_total_transferred_pages();
}
static void migration_update_counters(MigrationState *s,
int64_t current_time)
void migration_update_counters(MigrationState *s, int64_t current_time)
{
uint64_t transferred, transferred_pages, time_spent;
uint64_t current_bytes; /* bytes transferred since the beginning */
@@ -3907,6 +3982,7 @@ static void migration_iteration_finish(MigrationState *s)
case MIGRATION_STATUS_COMPLETED:
migration_calculate_complete(s);
runstate_set(RUN_STATE_POSTMIGRATE);
trace_migration_status((int)s->mbps / 8, (int)s->pages_per_second, s->total_time);
break;
case MIGRATION_STATUS_COLO:
if (!migrate_colo_enabled()) {
@@ -4318,6 +4394,20 @@ fail:
return NULL;
}
static int migrate_check_fixed_ram(MigrationState *s, Error **errp)
{
if (!s->enabled_capabilities[MIGRATION_CAPABILITY_FIXED_RAM]) {
return 0;
}
if (!qemu_file_is_seekable(s->to_dst_file)) {
error_setg(errp, "Directly mapped memory requires a seekable transport");
return -1;
}
return 0;
}
void migrate_fd_connect(MigrationState *s, Error *error_in)
{
Error *local_err = NULL;
@@ -4384,6 +4474,12 @@ void migrate_fd_connect(MigrationState *s, Error *error_in)
}
}
if (migrate_check_fixed_ram(s, &local_err) < 0) {
migrate_fd_cleanup(s);
migrate_fd_error(s, local_err);
return;
}
if (resume) {
/* Wakeup the main migration thread to do the recovery */
migrate_set_state(&s->state, MIGRATION_STATUS_POSTCOPY_PAUSED,
@@ -4513,6 +4609,7 @@ static Property migration_properties[] = {
DEFINE_PROP_STRING("tls-authz", MigrationState, parameters.tls_authz),
/* Migration capabilities */
DEFINE_PROP_MIG_CAP("x-fixed-ram", MIGRATION_CAPABILITY_FIXED_RAM),
DEFINE_PROP_MIG_CAP("x-xbzrle", MIGRATION_CAPABILITY_XBZRLE),
DEFINE_PROP_MIG_CAP("x-rdma-pin-all", MIGRATION_CAPABILITY_RDMA_PIN_ALL),
DEFINE_PROP_MIG_CAP("x-auto-converge", MIGRATION_CAPABILITY_AUTO_CONVERGE),
@@ -4600,6 +4697,7 @@ static void migration_instance_init(Object *obj)
params->has_announce_max = true;
params->has_announce_rounds = true;
params->has_announce_step = true;
params->has_direct_io = qemu_has_direct_io();
qemu_sem_init(&ms->postcopy_pause_sem, 0);
qemu_sem_init(&ms->postcopy_pause_rp_sem, 0);

View File

@@ -96,6 +96,8 @@ struct MigrationIncomingState {
bool have_listen_thread;
QemuThread listen_thread;
bool ram_migrated;
/* For the kernel to send us notifications */
int userfault_fd;
/* To notify the fault_thread to wake, e.g., when need to quit */
@@ -385,7 +387,9 @@ struct MigrationState {
};
void migrate_set_state(int *state, int old_state, int new_state);
void migration_calculate_complete(MigrationState *s);
void migration_update_counters(MigrationState *s,
int64_t current_time);
void migration_fd_process_incoming(QEMUFile *f, Error **errp);
void migration_ioc_process_incoming(QIOChannel *ioc, Error **errp);
void migration_incoming_process(void);
@@ -416,10 +420,13 @@ bool migrate_zero_blocks(void);
bool migrate_dirty_bitmaps(void);
bool migrate_ignore_shared(void);
bool migrate_validate_uuid(void);
int migrate_fixed_ram(void);
bool migrate_multifd_use_packets(void);
bool migrate_use_direct_io(void);
bool migrate_auto_converge(void);
bool migrate_use_multifd(void);
bool migrate_pause_before_switchover(void);
bool migrate_to_file(void);
int migrate_multifd_channels(void);
MultiFDCompression migrate_multifd_compression(void);
int migrate_multifd_zlib_level(void);

View File

@@ -17,6 +17,7 @@
#include "exec/ramblock.h"
#include "qemu/error-report.h"
#include "qapi/error.h"
#include "file.h"
#include "ram.h"
#include "migration.h"
#include "socket.h"
@@ -27,6 +28,7 @@
#include "threadinfo.h"
#include "qemu/yank.h"
#include "io/channel-file.h"
#include "io/channel-socket.h"
#include "yank_functions.h"
@@ -139,6 +141,7 @@ static void nocomp_recv_cleanup(MultiFDRecvParams *p)
static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
{
uint32_t flags = p->flags & MULTIFD_FLAG_COMPRESSION_MASK;
uint64_t read_base = 0;
if (flags != MULTIFD_FLAG_NOCOMP) {
error_setg(errp, "multifd %u: flags received %x flags expected %x",
@@ -149,7 +152,13 @@ static int nocomp_recv_pages(MultiFDRecvParams *p, Error **errp)
p->iov[i].iov_base = p->host + p->normal[i];
p->iov[i].iov_len = p->page_size;
}
return qio_channel_readv_all(p->c, p->iov, p->normal_num, errp);
if (migrate_fixed_ram()) {
read_base = p->pages->block->pages_offset - (uint64_t) p->host;
}
return qio_channel_read_full_all(p->c, p->iov, p->normal_num, read_base,
p->read_flags, errp);
}
static MultiFDMethods multifd_nocomp_ops = {
@@ -254,6 +263,19 @@ static void multifd_pages_clear(MultiFDPages_t *pages)
g_free(pages);
}
static void multifd_set_file_bitmap(MultiFDSendParams *p, bool set)
{
MultiFDPages_t *pages = p->pages;
if (!pages->block) {
return;
}
for (int i = 0; i < p->normal_num; i++) {
ramblock_set_shadow_bmap(pages->block, pages->offset[i], set);
}
}
static void multifd_send_fill_packet(MultiFDSendParams *p)
{
MultiFDPacket_t *packet = p->packet;
@@ -512,6 +534,15 @@ static void multifd_send_terminate_threads(Error *err)
}
}
static int multifd_send_channel_destroy(QIOChannel *send)
{
if (migrate_to_file()) {
return file_send_channel_destroy(send);
} else {
return socket_send_channel_destroy(send);
}
}
void multifd_save_cleanup(void)
{
int i;
@@ -534,7 +565,7 @@ void multifd_save_cleanup(void)
if (p->registered_yank) {
migration_ioc_unregister_yank(p->c);
}
socket_send_channel_destroy(p->c);
multifd_send_channel_destroy(p->c);
p->c = NULL;
qemu_mutex_destroy(&p->mutex);
qemu_sem_destroy(&p->sem);
@@ -597,6 +628,17 @@ int multifd_send_sync_main(QEMUFile *f)
}
}
if (!migrate_multifd_use_packets()) {
for (i = 0; i < migrate_multifd_channels(); i++) {
MultiFDSendParams *p = &multifd_send_state->params[i];
qemu_sem_post(&p->sem);
continue;
}
return 0;
}
/*
* When using zero-copy, it's necessary to flush the pages before any of
* the pages can be sent again, so we'll make sure the new version of the
@@ -654,18 +696,22 @@ static void *multifd_send_thread(void *opaque)
Error *local_err = NULL;
int ret = 0;
bool use_zero_copy_send = migrate_use_zero_copy_send();
bool use_packets = migrate_multifd_use_packets();
thread = MigrationThreadAdd(p->name, qemu_get_thread_id());
trace_multifd_send_thread_start(p->id);
rcu_register_thread();
if (multifd_send_initial_packet(p, &local_err) < 0) {
ret = -1;
goto out;
if (use_packets) {
if (multifd_send_initial_packet(p, &local_err) < 0) {
ret = -1;
goto out;
}
/* initial packet */
p->num_packets = 1;
}
/* initial packet */
p->num_packets = 1;
while (true) {
qemu_sem_wait(&p->sem);
@@ -676,11 +722,12 @@ static void *multifd_send_thread(void *opaque)
qemu_mutex_lock(&p->mutex);
if (p->pending_job) {
uint64_t packet_num = p->packet_num;
uint32_t flags;
uint64_t write_base;
p->normal_num = 0;
if (use_zero_copy_send) {
if (!use_packets || use_zero_copy_send) {
p->iovs_num = 0;
} else {
p->iovs_num = 1;
@@ -698,16 +745,30 @@ static void *multifd_send_thread(void *opaque)
break;
}
}
multifd_send_fill_packet(p);
if (use_packets) {
multifd_send_fill_packet(p);
p->num_packets++;
write_base = 0;
} else {
multifd_set_file_bitmap(p, true);
/*
* If we subtract the host page now, we don't need to
* pass it into qio_channel_write_full_all() below.
*/
write_base = p->pages->block->pages_offset -
(uint64_t)p->pages->block->host;
}
flags = p->flags;
p->flags = 0;
p->num_packets++;
p->total_normal_pages += p->normal_num;
p->pages->num = 0;
p->pages->block = NULL;
qemu_mutex_unlock(&p->mutex);
trace_multifd_send(p->id, packet_num, p->normal_num, flags,
trace_multifd_send(p->id, p->packet_num, p->normal_num, flags,
p->next_packet_size);
if (use_zero_copy_send) {
@@ -717,14 +778,15 @@ static void *multifd_send_thread(void *opaque)
if (ret != 0) {
break;
}
} else {
} else if (use_packets) {
/* Send header using the same writev call */
p->iov[0].iov_len = p->packet_len;
p->iov[0].iov_base = p->packet;
}
ret = qio_channel_writev_full_all(p->c, p->iov, p->iovs_num, NULL,
0, p->write_flags, &local_err);
ret = qio_channel_write_full_all(p->c, p->iov, p->iovs_num,
write_base, NULL, 0,
p->write_flags, &local_err);
if (ret != 0) {
break;
}
@@ -740,6 +802,13 @@ static void *multifd_send_thread(void *opaque)
} else if (p->quit) {
qemu_mutex_unlock(&p->mutex);
break;
} else if (!use_packets) {
/*
* When migrating to a file there's no need for a SYNC
* packet, the channels are ready right away.
*/
qemu_sem_post(&multifd_send_state->channels_ready);
qemu_mutex_unlock(&p->mutex);
} else {
qemu_mutex_unlock(&p->mutex);
/* sometimes there are spurious wakeups */
@@ -749,6 +818,7 @@ static void *multifd_send_thread(void *opaque)
out:
if (local_err) {
trace_multifd_send_error(p->id);
multifd_set_file_bitmap(p, false);
multifd_send_terminate_threads(local_err);
error_free(local_err);
}
@@ -889,26 +959,36 @@ static void multifd_new_send_channel_cleanup(MultiFDSendParams *p,
static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
{
MultiFDSendParams *p = opaque;
QIOChannel *sioc = QIO_CHANNEL(qio_task_get_source(task));
QIOChannel *ioc = QIO_CHANNEL(qio_task_get_source(task));
Error *local_err = NULL;
trace_multifd_new_send_channel_async(p->id);
if (!qio_task_propagate_error(task, &local_err)) {
p->c = QIO_CHANNEL(sioc);
p->c = QIO_CHANNEL(ioc);
qio_channel_set_delay(p->c, false);
p->running = true;
if (multifd_channel_connect(p, sioc, local_err)) {
if (multifd_channel_connect(p, ioc, local_err)) {
return;
}
}
multifd_new_send_channel_cleanup(p, sioc, local_err);
multifd_new_send_channel_cleanup(p, ioc, local_err);
}
static void multifd_new_send_channel_create(gpointer opaque)
{
if (migrate_to_file()) {
file_send_channel_create(multifd_new_send_channel_async, opaque);
} else {
socket_send_channel_create(multifd_new_send_channel_async, opaque);
}
}
int multifd_save_setup(Error **errp)
{
int thread_count;
uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
bool use_packets = migrate_multifd_use_packets();
uint8_t i;
if (!migrate_use_multifd()) {
@@ -933,25 +1013,33 @@ int multifd_save_setup(Error **errp)
p->pending_job = 0;
p->id = i;
p->pages = multifd_pages_init(page_count);
p->packet_len = sizeof(MultiFDPacket_t)
+ sizeof(uint64_t) * page_count;
p->packet = g_malloc0(p->packet_len);
p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
p->packet->version = cpu_to_be32(MULTIFD_VERSION);
if (use_packets) {
p->packet_len = sizeof(MultiFDPacket_t)
+ sizeof(uint64_t) * page_count;
p->packet = g_malloc0(p->packet_len);
p->packet->magic = cpu_to_be32(MULTIFD_MAGIC);
p->packet->version = cpu_to_be32(MULTIFD_VERSION);
/* We need one extra place for the packet header */
p->iov = g_new0(struct iovec, page_count + 1);
} else {
p->iov = g_new0(struct iovec, page_count);
}
p->name = g_strdup_printf("multifdsend_%d", i);
/* We need one extra place for the packet header */
p->iov = g_new0(struct iovec, page_count + 1);
p->normal = g_new0(ram_addr_t, page_count);
p->page_size = qemu_target_page_size();
p->page_count = page_count;
if (migrate_use_zero_copy_send()) {
p->write_flags = QIO_CHANNEL_WRITE_FLAG_ZERO_COPY;
} else if (!use_packets) {
p->write_flags |= QIO_CHANNEL_WRITE_FLAG_WITH_OFFSET;
} else {
p->write_flags = 0;
}
socket_send_channel_create(multifd_new_send_channel_async, p);
multifd_new_send_channel_create(p);
}
for (i = 0; i < thread_count; i++) {
@@ -970,6 +1058,8 @@ int multifd_save_setup(Error **errp)
struct {
MultiFDRecvParams *params;
/* array of pages to receive */
MultiFDPages_t *pages;
/* number of created threads */
int count;
/* syncs main thread and channels */
@@ -980,6 +1070,66 @@ struct {
MultiFDMethods *ops;
} *multifd_recv_state;
static int multifd_recv_pages(QEMUFile *f)
{
int i;
static int next_recv_channel;
MultiFDRecvParams *p = NULL;
MultiFDPages_t *pages = multifd_recv_state->pages;
/*
* next_recv_channel can remain from a previous migration that was
* using more channels, so ensure it doesn't overflow if the
* limit is lower now.
*/
next_recv_channel %= migrate_multifd_channels();
for (i = next_recv_channel;; i = (i + 1) % migrate_multifd_channels()) {
p = &multifd_recv_state->params[i];
qemu_mutex_lock(&p->mutex);
if (p->quit) {
error_report("%s: channel %d has already quit!", __func__, i);
qemu_mutex_unlock(&p->mutex);
return -1;
}
if (!p->pending_job) {
p->pending_job++;
next_recv_channel = (i + 1) % migrate_multifd_channels();
break;
}
qemu_mutex_unlock(&p->mutex);
}
multifd_recv_state->pages = p->pages;
p->pages = pages;
qemu_mutex_unlock(&p->mutex);
qemu_sem_post(&p->sem);
return 1;
}
int multifd_recv_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset)
{
MultiFDPages_t *pages = multifd_recv_state->pages;
if (!pages->block) {
pages->block = block;
}
pages->offset[pages->num] = offset;
pages->num++;
if (pages->num < pages->allocated) {
return 1;
}
if (multifd_recv_pages(f) < 0) {
return -1;
}
return 1;
}
static void multifd_recv_terminate_threads(Error *err)
{
int i;
@@ -1001,6 +1151,7 @@ static void multifd_recv_terminate_threads(Error *err)
qemu_mutex_lock(&p->mutex);
p->quit = true;
qemu_sem_post(&p->sem);
/*
* We could arrive here for two reasons:
* - normal quit, i.e. everything went fine, just finished
@@ -1049,9 +1200,12 @@ void multifd_load_cleanup(void)
object_unref(OBJECT(p->c));
p->c = NULL;
qemu_mutex_destroy(&p->mutex);
qemu_sem_destroy(&p->sem);
qemu_sem_destroy(&p->sem_sync);
g_free(p->name);
p->name = NULL;
multifd_pages_clear(p->pages);
p->pages = NULL;
p->packet_len = 0;
g_free(p->packet);
p->packet = NULL;
@@ -1064,6 +1218,8 @@ void multifd_load_cleanup(void)
qemu_sem_destroy(&multifd_recv_state->sem_sync);
g_free(multifd_recv_state->params);
multifd_recv_state->params = NULL;
multifd_pages_clear(multifd_recv_state->pages);
multifd_recv_state->pages = NULL;
g_free(multifd_recv_state);
multifd_recv_state = NULL;
}
@@ -1075,6 +1231,18 @@ void multifd_recv_sync_main(void)
if (!migrate_use_multifd()) {
return;
}
if (!migrate_multifd_use_packets()) {
for (i = 0; i < migrate_multifd_channels(); i++) {
MultiFDRecvParams *p = &multifd_recv_state->params[i];
qemu_sem_post(&p->sem);
continue;
}
return;
}
for (i = 0; i < migrate_multifd_channels(); i++) {
MultiFDRecvParams *p = &multifd_recv_state->params[i];
@@ -1099,6 +1267,7 @@ static void *multifd_recv_thread(void *opaque)
{
MultiFDRecvParams *p = opaque;
Error *local_err = NULL;
bool use_packets = migrate_multifd_use_packets();
int ret;
trace_multifd_recv_thread_start(p->id);
@@ -1106,22 +1275,45 @@ static void *multifd_recv_thread(void *opaque)
while (true) {
uint32_t flags;
p->normal_num = 0;
if (p->quit) {
break;
}
ret = qio_channel_read_all_eof(p->c, (void *)p->packet,
p->packet_len, &local_err);
if (ret == 0 || ret == -1) { /* 0: EOF -1: Error */
break;
}
if (use_packets) {
ret = qio_channel_read_all_eof(p->c, (void *)p->packet,
p->packet_len, &local_err);
if (ret == 0 || ret == -1) { /* 0: EOF -1: Error */
break;
}
qemu_mutex_lock(&p->mutex);
ret = multifd_recv_unfill_packet(p, &local_err);
if (ret) {
qemu_mutex_unlock(&p->mutex);
break;
qemu_mutex_lock(&p->mutex);
ret = multifd_recv_unfill_packet(p, &local_err);
if (ret) {
qemu_mutex_unlock(&p->mutex);
break;
}
p->num_packets++;
} else {
/*
* No packets, so we need to wait for the vmstate code to
* queue pages.
*/
qemu_sem_wait(&p->sem);
qemu_mutex_lock(&p->mutex);
if (!p->pending_job) {
qemu_mutex_unlock(&p->mutex);
break;
}
for (int i = 0; i < p->pages->num; i++) {
p->normal[p->normal_num] = p->pages->offset[i];
p->normal_num++;
}
p->pages->num = 0;
p->host = p->pages->block->host;
}
flags = p->flags;
@@ -1129,7 +1321,7 @@ static void *multifd_recv_thread(void *opaque)
p->flags &= ~MULTIFD_FLAG_SYNC;
trace_multifd_recv(p->id, p->packet_num, p->normal_num, flags,
p->next_packet_size);
p->num_packets++;
p->total_normal_pages += p->normal_num;
qemu_mutex_unlock(&p->mutex);
@@ -1144,6 +1336,13 @@ static void *multifd_recv_thread(void *opaque)
qemu_sem_post(&multifd_recv_state->sem_sync);
qemu_sem_wait(&p->sem_sync);
}
if (!use_packets) {
qemu_mutex_lock(&p->mutex);
p->pending_job--;
p->pages->block = NULL;
qemu_mutex_unlock(&p->mutex);
}
}
if (local_err) {
@@ -1164,6 +1363,7 @@ int multifd_load_setup(Error **errp)
{
int thread_count;
uint32_t page_count = MULTIFD_PACKET_SIZE / qemu_target_page_size();
bool use_packets = migrate_multifd_use_packets();
uint8_t i;
/*
@@ -1177,6 +1377,7 @@ int multifd_load_setup(Error **errp)
thread_count = migrate_multifd_channels();
multifd_recv_state = g_malloc0(sizeof(*multifd_recv_state));
multifd_recv_state->params = g_new0(MultiFDRecvParams, thread_count);
multifd_recv_state->pages = multifd_pages_init(page_count);
qatomic_set(&multifd_recv_state->count, 0);
qemu_sem_init(&multifd_recv_state->sem_sync, 0);
multifd_recv_state->ops = multifd_ops[migrate_multifd_compression()];
@@ -1185,12 +1386,20 @@ int multifd_load_setup(Error **errp)
MultiFDRecvParams *p = &multifd_recv_state->params[i];
qemu_mutex_init(&p->mutex);
qemu_sem_init(&p->sem, 0);
qemu_sem_init(&p->sem_sync, 0);
p->quit = false;
p->pending_job = 0;
p->id = i;
p->packet_len = sizeof(MultiFDPacket_t)
+ sizeof(uint64_t) * page_count;
p->packet = g_malloc0(p->packet_len);
p->pages = multifd_pages_init(page_count);
if (use_packets) {
p->packet_len = sizeof(MultiFDPacket_t)
+ sizeof(uint64_t) * page_count;
p->packet = g_malloc0(p->packet_len);
} else {
p->read_flags |= QIO_CHANNEL_READ_FLAG_WITH_OFFSET;
}
p->name = g_strdup_printf("multifdrecv_%d", i);
p->iov = g_new0(struct iovec, page_count);
p->normal = g_new0(ram_addr_t, page_count);
@@ -1212,6 +1421,11 @@ int multifd_load_setup(Error **errp)
return 0;
}
bool multifd_recv_first_channel(void)
{
return !multifd_recv_state;
}
bool multifd_recv_all_channels_created(void)
{
int thread_count = migrate_multifd_channels();
@@ -1236,18 +1450,26 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
{
MultiFDRecvParams *p;
Error *local_err = NULL;
int id;
bool use_packets = migrate_multifd_use_packets();
int id, num_packets = 0;
if (use_packets) {
id = multifd_recv_initial_packet(ioc, &local_err);
if (id < 0) {
multifd_recv_terminate_threads(local_err);
error_propagate_prepend(errp, local_err,
"failed to receive packet"
" via multifd channel %d: ",
qatomic_read(&multifd_recv_state->count));
return;
}
trace_multifd_recv_new_channel(id);
/* initial packet */
num_packets = 1;
} else {
id = qatomic_read(&multifd_recv_state->count);
}
p = &multifd_recv_state->params[id];
if (p->c != NULL) {
@@ -1258,9 +1480,8 @@ void multifd_recv_new_channel(QIOChannel *ioc, Error **errp)
return;
}
p->c = ioc;
p->num_packets = num_packets;
object_ref(OBJECT(ioc));
p->running = true;
qemu_thread_create(&p->thread, p->name, multifd_recv_thread, p,

View File

@@ -18,11 +18,13 @@ void multifd_save_cleanup(void);
int multifd_load_setup(Error **errp);
void multifd_load_cleanup(void);
void multifd_load_shutdown(void);
bool multifd_recv_first_channel(void);
bool multifd_recv_all_channels_created(void);
void multifd_recv_new_channel(QIOChannel *ioc, Error **errp);
void multifd_recv_sync_main(void);
int multifd_send_sync_main(QEMUFile *f);
int multifd_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset);
int multifd_recv_queue_page(QEMUFile *f, RAMBlock *block, ram_addr_t offset);
/* Multifd Compression flags */
#define MULTIFD_FLAG_SYNC (1 << 0)
@@ -152,7 +154,11 @@ typedef struct {
uint32_t page_size;
/* number of pages in a full packet */
uint32_t page_count;
/* multifd flags for receiving ram */
int read_flags;
/* sem where to wait for more work */
QemuSemaphore sem;
/* syncs main thread and channels */
QemuSemaphore sem_sync;
@@ -166,6 +172,13 @@ typedef struct {
uint32_t flags;
/* global number of generated multifd packets */
uint64_t packet_num;
int pending_job;
/* array of pages to be sent.
* The owner of 'pages' depends on the 'pending_job' value:
* pending_job == 0 -> migration_thread can use it.
* pending_job != 0 -> multifd_channel can use it.
*/
MultiFDPages_t *pages;
/* thread local variables. No locking required */
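The new pending_job/pages fields define an ownership handoff: the thread queueing pages only touches 'pages' while pending_job is zero, and the channel thread only while it is non-zero, with the semaphore signalling new work. A rough Python sketch of that handoff (illustrative only; names are made up, not QEMU code):

import threading

class RecvChannel:
    def __init__(self):
        self.lock = threading.Lock()
        self.sem = threading.Semaphore(0)
        self.pending_job = 0
        self.pages = []                   # owned by the queuer while pending_job == 0

    def queue_pages(self, offsets):       # load-thread side
        with self.lock:
            assert self.pending_job == 0  # we own 'pages' right now
            self.pages = list(offsets)
            self.pending_job += 1         # hand ownership to the channel
        self.sem.release()                # wake the channel thread

    def channel_loop(self, read_page):    # multifd channel thread side
        while True:
            self.sem.acquire()            # wait for more work
            with self.lock:
                if not self.pending_job:  # woken without a job: shut down
                    break
                offsets = self.pages      # we own 'pages' now
            for off in offsets:
                read_page(off)            # e.g. a pread() at pages_offset + off
            with self.lock:
                self.pending_job -= 1     # give 'pages' back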

View File

@@ -30,6 +30,7 @@
#include "qemu-file.h"
#include "trace.h"
#include "qapi/error.h"
#include "io/channel-file.h"
#define IO_BUF_SIZE 32768
#define MAX_IOV_SIZE MIN_CONST(IOV_MAX, 64)
@@ -281,6 +282,10 @@ static void qemu_iovec_release_ram(QEMUFile *f)
memset(f->may_free, 0, sizeof(f->may_free));
}
bool qemu_file_is_seekable(QEMUFile *f)
{
return qio_channel_has_feature(f->ioc, QIO_CHANNEL_FEATURE_SEEKABLE);
}
/**
* Flushes QEMUFile buffer
@@ -559,6 +564,81 @@ void qemu_put_buffer(QEMUFile *f, const uint8_t *buf, size_t size)
}
}
void qemu_put_buffer_at(QEMUFile *f, const uint8_t *buf, size_t buflen, off_t pos)
{
Error *err = NULL;
if (f->last_error) {
return;
}
qemu_fflush(f);
qio_channel_pwritev(f->ioc, (char *)buf, buflen, pos, &err);
if (err) {
qemu_file_set_error_obj(f, -EIO, err);
} else {
f->total_transferred += buflen;
}
return;
}
size_t qemu_get_buffer_at(QEMUFile *f, const uint8_t *buf, size_t buflen, off_t pos)
{
Error *err = NULL;
ssize_t ret;
if (f->last_error) {
return 0;
}
ret = qio_channel_preadv(f->ioc, (char *)buf, buflen, pos, &err);
if (ret == -1 || err) {
goto error;
}
return (size_t)ret;
error:
qemu_file_set_error_obj(f, -EIO, err);
return 0;
}
void qemu_set_offset(QEMUFile *f, off_t off, int whence)
{
Error *err = NULL;
off_t ret;
qemu_fflush(f);
if (!qemu_file_is_writable(f)) {
f->buf_index = 0;
f->buf_size = 0;
}
ret = qio_channel_io_seek(f->ioc, off, whence, &err);
if (ret == (off_t)-1) {
qemu_file_set_error_obj(f, -EIO, err);
}
}
off_t qemu_get_offset(QEMUFile *f)
{
Error *err = NULL;
off_t ret;
qemu_fflush(f);
ret = qio_channel_io_seek(f->ioc, 0, SEEK_CUR, &err);
if (ret == (off_t)-1) {
qemu_file_set_error_obj(f, -EIO, err);
}
return ret;
}
void qemu_put_byte(QEMUFile *f, int v)
{
if (f->last_error) {

View File

@@ -149,6 +149,10 @@ QEMUFile *qemu_file_get_return_path(QEMUFile *f);
void qemu_fflush(QEMUFile *f);
void qemu_file_set_blocking(QEMUFile *f, bool block);
int qemu_file_get_to_fd(QEMUFile *f, int fd, size_t size);
void qemu_set_offset(QEMUFile *f, off_t off, int whence);
off_t qemu_get_offset(QEMUFile *f);
void qemu_put_buffer_at(QEMUFile *f, const uint8_t *buf, size_t buflen, off_t pos);
size_t qemu_get_buffer_at(QEMUFile *f, const uint8_t *buf, size_t buflen, off_t pos);
void ram_control_before_iterate(QEMUFile *f, uint64_t flags);
void ram_control_after_iterate(QEMUFile *f, uint64_t flags);
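qemu_put_buffer_at()/qemu_get_buffer_at() give QEMUFile positioned I/O: data is written or read at an explicit file offset rather than at the stream's current position, with pread/pwrite-style semantics. A minimal Python equivalent of that behaviour (illustrative, not the QEMU implementation):

import os

def put_buffer_at(fd, buf, pos):
    """Write buf at absolute offset pos, without moving the file cursor."""
    done = 0
    while done < len(buf):                # pwrite may write less than asked
        done += os.pwrite(fd, buf[done:], pos + done)
    return done

def get_buffer_at(fd, length, pos):
    """Read up to length bytes starting at absolute offset pos."""
    chunks = []
    while length > 0:
        chunk = os.pread(fd, length, pos)
        if not chunk:                     # EOF
            break
        chunks.append(chunk)
        pos += len(chunk)
        length -= len(chunk)
    return b"".join(chunks)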

View File

@@ -1310,9 +1310,14 @@ static int save_zero_page_to_file(PageSearchStatus *pss,
int len = 0;
if (buffer_is_zero(p, TARGET_PAGE_SIZE)) {
if (migrate_fixed_ram()) {
/* for zero pages we don't need to do anything */
len = 1;
} else {
len += save_page_header(pss, block, offset | RAM_SAVE_FLAG_ZERO);
qemu_put_byte(file, 0);
len += 1;
}
ram_release_page(block->idstr, offset);
}
return len;
@@ -1394,14 +1399,20 @@ static int save_normal_page(PageSearchStatus *pss, RAMBlock *block,
{
QEMUFile *file = pss->pss_channel;
if (migrate_fixed_ram()) {
qemu_put_buffer_at(file, buf, TARGET_PAGE_SIZE,
block->pages_offset + offset);
set_bit(offset >> TARGET_PAGE_BITS, block->shadow_bmap);
} else {
ram_transferred_add(save_page_header(pss, block,
offset | RAM_SAVE_FLAG_PAGE));
if (async) {
qemu_put_buffer_async(file, buf, TARGET_PAGE_SIZE,
migrate_release_ram() &&
migration_in_postcopy());
} else {
qemu_put_buffer(file, buf, TARGET_PAGE_SIZE);
}
}
ram_transferred_add(TARGET_PAGE_SIZE);
stat64_add(&ram_atomic_counters.normal, 1);
@@ -2731,6 +2742,8 @@ static void ram_save_cleanup(void *opaque)
block->clear_bmap = NULL;
g_free(block->bmap);
block->bmap = NULL;
g_free(block->shadow_bmap);
block->shadow_bmap = NULL;
}
xbzrle_cleanup();
@@ -3098,6 +3111,7 @@ static void ram_list_init_bitmaps(void)
*/
block->bmap = bitmap_new(pages);
bitmap_set(block->bmap, 0, pages);
block->shadow_bmap = bitmap_new(block->used_length >> TARGET_PAGE_BITS);
block->clear_bmap_shift = shift;
block->clear_bmap = bitmap_new(clear_bmap_size(pages, shift));
}
@@ -3287,6 +3301,33 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
if (migrate_ignore_shared()) {
qemu_put_be64(f, block->mr->addr);
}
if (migrate_fixed_ram()) {
long num_pages = block->used_length >> TARGET_PAGE_BITS;
long bitmap_size = BITS_TO_LONGS(num_pages) * sizeof(unsigned long);
/* Needed for external programs (think analyze-migration.py) */
qemu_put_be32(f, bitmap_size);
/*
* The bitmap is stored right after the 8-byte pages_offset
* field written below, hence the "+ 8".
*/
block->bitmap_offset = qemu_get_offset(f) + 8;
/*
* Make pages_offset aligned to 1 MiB to account for
* migration file movement between filesystems with
* possibly different alignment restrictions when
* using O_DIRECT.
*/
block->pages_offset = ROUND_UP(block->bitmap_offset +
bitmap_size, 0x100000);
qemu_put_be64(f, block->pages_offset);
/* Now prepare offset for next ramblock */
qemu_set_offset(f, block->pages_offset + block->used_length, SEEK_SET);
}
}
}
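Per ramblock, the fixed-ram header therefore carries the bitmap size, the pages_offset, the shadow bitmap at bitmap_offset, and then used_length bytes of page data starting at the 1 MiB-aligned pages_offset. A small worked example of that offset arithmetic (page size, block size and header position are arbitrary assumptions):

TARGET_PAGE_SIZE = 4096
ALIGNMENT = 1024 * 1024                  # pages_offset is rounded up to 1 MiB

def round_up(n, align):
    return (n + align - 1) // align * align

def fixed_ram_layout(header_end, used_length):
    """header_end: file offset right after the 32-bit bitmap size field."""
    num_pages = used_length // TARGET_PAGE_SIZE
    # one unsigned long (assumed 8 bytes) per 64 pages, as BITS_TO_LONGS() does
    bitmap_size = (num_pages + 63) // 64 * 8
    bitmap_offset = header_end + 8       # the 8-byte pages_offset field comes first
    pages_offset = round_up(bitmap_offset + bitmap_size, ALIGNMENT)
    next_block = pages_offset + used_length
    return bitmap_offset, pages_offset, next_block

# e.g. a 1 GiB ramblock whose header ends at offset 0x200:
print(fixed_ram_layout(0x200, 1 << 30))  # (520, 1048576, 1074790400)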
@@ -3306,6 +3347,27 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
return 0;
}
static void ram_save_shadow_bmap(QEMUFile *f)
{
RAMBlock *block;
RAMBLOCK_FOREACH_MIGRATABLE(block) {
long num_pages = block->used_length >> TARGET_PAGE_BITS;
long bitmap_size = BITS_TO_LONGS(num_pages) * sizeof(unsigned long);
qemu_put_buffer_at(f, (uint8_t *)block->shadow_bmap, bitmap_size,
block->bitmap_offset);
}
}
void ramblock_set_shadow_bmap(RAMBlock *block, ram_addr_t offset, bool set)
{
if (set) {
set_bit(offset >> TARGET_PAGE_BITS, block->shadow_bmap);
} else {
clear_bit(offset >> TARGET_PAGE_BITS, block->shadow_bmap);
}
}
/**
* ram_save_iterate: iterative stage for migration
*
@@ -3413,9 +3475,15 @@ out:
return ret;
}
/*
* For fixed ram we don't want to pollute the migration stream with
* EOS flags.
*/
if (!migrate_fixed_ram()) {
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
ram_transferred_add(8);
}
ret = qemu_file_get_error(f);
}
@@ -3461,6 +3529,9 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
pages = ram_find_and_save_block(rs);
/* no more blocks to send */
if (pages == 0) {
if (migrate_fixed_ram()) {
ram_save_shadow_bmap(f);
}
break;
}
if (pages < 0) {
@@ -3483,8 +3554,10 @@ static int ram_save_complete(QEMUFile *f, void *opaque)
return ret;
}
if (!migrate_fixed_ram()) {
qemu_put_be64(f, RAM_SAVE_FLAG_EOS);
qemu_fflush(f);
}
return 0;
}
@@ -4255,6 +4328,182 @@ void colo_flush_ram_cache(void)
trace_colo_flush_ram_cache_end();
}
static int parse_ramblock(QEMUFile *f, RAMBlock *block, ram_addr_t length)
{
int ret = 0;
/* ADVISE is earlier, it shows the source has the postcopy capability on */
bool postcopy_advised = migration_incoming_postcopy_advised();
assert(block);
if (!qemu_ram_is_migratable(block)) {
error_report("block %s should not be migrated !", block->idstr);
ret = -EINVAL;
}
if (length != block->used_length) {
Error *local_err = NULL;
ret = qemu_ram_resize(block, length, &local_err);
if (local_err) {
error_report_err(local_err);
}
}
/* For postcopy we need to check hugepage sizes match */
if (postcopy_advised && migrate_postcopy_ram() &&
block->page_size != qemu_host_page_size) {
uint64_t remote_page_size = qemu_get_be64(f);
if (remote_page_size != block->page_size) {
error_report("Mismatched RAM page size %s "
"(local) %zd != %" PRId64, block->idstr,
block->page_size, remote_page_size);
ret = -EINVAL;
}
}
if (migrate_ignore_shared()) {
hwaddr addr = qemu_get_be64(f);
if (ramblock_is_ignored(block) &&
block->mr->addr != addr) {
error_report("Mismatched GPAs for block %s "
"%" PRId64 "!= %" PRId64, block->idstr,
(uint64_t)addr,
(uint64_t)block->mr->addr);
ret = -EINVAL;
}
}
ram_control_load_hook(f, RAM_CONTROL_BLOCK_REG, block->idstr);
return ret;
}
static int parse_ramblocks(QEMUFile *f, ram_addr_t total_ram_bytes)
{
int ret = 0;
/* Synchronize RAM block list */
while (!ret && total_ram_bytes) {
char id[256];
RAMBlock *block;
ram_addr_t length;
int len = qemu_get_byte(f);
qemu_get_buffer(f, (uint8_t *)id, len);
id[len] = 0;
length = qemu_get_be64(f);
block = qemu_ram_block_by_name(id);
if (block) {
ret = parse_ramblock(f, block, length);
} else {
error_report("Unknown ramblock \"%s\", cannot accept "
"migration", id);
ret = -EINVAL;
}
total_ram_bytes -= length;
}
return ret;
}
static void read_ramblock_fixed_ram(QEMUFile *f, RAMBlock *block,
long num_pages, unsigned long *bitmap)
{
unsigned long set_bit_idx, clear_bit_idx;
unsigned long len;
ram_addr_t offset;
void *host;
size_t read, completed, read_len;
for (set_bit_idx = find_first_bit(bitmap, num_pages);
set_bit_idx < num_pages;
set_bit_idx = find_next_bit(bitmap, num_pages, clear_bit_idx + 1)) {
clear_bit_idx = find_next_zero_bit(bitmap, num_pages, set_bit_idx + 1);
len = TARGET_PAGE_SIZE * (clear_bit_idx - set_bit_idx);
offset = set_bit_idx << TARGET_PAGE_BITS;
for (read = 0, completed = 0; completed < len; offset += read) {
host = host_from_ram_block_offset(block, offset);
read_len = MIN(len, TARGET_PAGE_SIZE);
if (migrate_use_multifd()) {
multifd_recv_queue_page(f, block, offset);
read = read_len;
} else {
read = qemu_get_buffer_at(f, host, read_len,
block->pages_offset + offset);
}
completed += read;
}
}
}
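read_ramblock_fixed_ram() walks the bitmap in runs of consecutive set bits and reads each run from pages_offset + page_offset, either directly or by queueing the pages to a multifd channel; clear bits are simply left untouched on the destination. The run-finding idea, roughly, in Python (illustrative, using a plain list of bits rather than QEMU's bitmap helpers):

import os

def bit_runs(bits):
    """Yield (first_page, npages) for each run of consecutive set bits."""
    i, n = 0, len(bits)
    while i < n:
        if not bits[i]:
            i += 1
            continue
        start = i
        while i < n and bits[i]:
            i += 1
        yield start, i - start

def read_block(fd, pages_offset, bits, page_size=4096):
    data = {}
    for first_page, npages in bit_runs(bits):
        offset = first_page * page_size
        # only pages whose bit is set were written by the source
        data[offset] = os.pread(fd, npages * page_size, pages_offset + offset)
    return data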
static int parse_ramblocks_fixed_ram(QEMUFile *f)
{
int ret = 0;
while (!ret) {
char id[256];
RAMBlock *block;
ram_addr_t length;
long num_pages, bitmap_size;
int len = qemu_get_byte(f);
g_autofree unsigned long *dirty_bitmap = NULL;
qemu_get_buffer(f, (uint8_t *)id, len);
id[len] = 0;
length = qemu_get_be64(f);
block = qemu_ram_block_by_name(id);
if (block) {
ret = parse_ramblock(f, block, length);
if (ret < 0) {
return ret;
}
} else {
error_report("Unknown ramblock \"%s\", cannot accept "
"migration", id);
ret = -EINVAL;
continue;
}
/* 1. read the bitmap size */
num_pages = length >> TARGET_PAGE_BITS;
bitmap_size = qemu_get_be32(f);
assert(bitmap_size == BITS_TO_LONGS(num_pages) * sizeof(unsigned long));
block->pages_offset = qemu_get_be64(f);
/* 2. read the actual bitmap */
dirty_bitmap = g_malloc0(bitmap_size);
if (qemu_get_buffer(f, (uint8_t *)dirty_bitmap, bitmap_size) != bitmap_size) {
error_report("Error parsing dirty bitmap");
return -EINVAL;
}
read_ramblock_fixed_ram(f, block, num_pages, dirty_bitmap);
/* Skip pages array */
qemu_set_offset(f, block->pages_offset + length, SEEK_SET);
/* Check if this is the last ramblock */
if (qemu_get_be64(f) == RAM_SAVE_FLAG_EOS) {
ret = 1;
} else {
/*
* If not, adjust the internal file index to account for the
* previous 64 bit read
*/
qemu_file_skip(f, -8);
ret = 0;
}
}
return ret;
}
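Put together, each fixed-ram ramblock record in the file is: a one-byte idstr length, the idstr, a big-endian 64-bit used_length, a 32-bit bitmap size, a 64-bit pages_offset, the shadow bitmap, and then the page data, which the loader skips by seeking to pages_offset + used_length. A rough stand-alone parser for one such record (an assumption-level sketch, not the analyze-migration.py code):

import struct

def parse_fixed_ram_block(f):
    """f: binary file object positioned at the start of one ramblock record."""
    name_len = f.read(1)[0]
    name = f.read(name_len).decode()
    used_length, = struct.unpack(">Q", f.read(8))   # big-endian 64 bit
    bitmap_size, = struct.unpack(">I", f.read(4))
    pages_offset, = struct.unpack(">Q", f.read(8))
    bitmap = f.read(bitmap_size)                    # shadow bitmap
    f.seek(pages_offset + used_length)              # skip the page data
    return name, used_length, pages_offset, bitmap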
/**
* ram_load_precopy: load pages in precopy case
*
@@ -4269,14 +4518,13 @@ static int ram_load_precopy(QEMUFile *f)
{
MigrationIncomingState *mis = migration_incoming_get_current();
int flags = 0, ret = 0, invalid_flags = 0, len = 0, i = 0;
/* ADVISE is earlier, it shows the source has the postcopy capability on */
bool postcopy_advised = migration_incoming_postcopy_advised();
if (!migrate_use_compression()) {
invalid_flags |= RAM_SAVE_FLAG_COMPRESS_PAGE;
}
while (!ret && !(flags & RAM_SAVE_FLAG_EOS) && !mis->ram_migrated) {
ram_addr_t addr;
void *host = NULL, *host_bak = NULL;
uint8_t ch;
@@ -4347,64 +4595,13 @@ static int ram_load_precopy(QEMUFile *f)
switch (flags & ~RAM_SAVE_FLAG_CONTINUE) {
case RAM_SAVE_FLAG_MEM_SIZE:
/* Synchronize RAM block list */
total_ram_bytes = addr;
while (!ret && total_ram_bytes) {
RAMBlock *block;
char id[256];
ram_addr_t length;
len = qemu_get_byte(f);
qemu_get_buffer(f, (uint8_t *)id, len);
id[len] = 0;
length = qemu_get_be64(f);
block = qemu_ram_block_by_name(id);
if (block && !qemu_ram_is_migratable(block)) {
error_report("block %s should not be migrated !", id);
ret = -EINVAL;
} else if (block) {
if (length != block->used_length) {
Error *local_err = NULL;
ret = qemu_ram_resize(block, length,
&local_err);
if (local_err) {
error_report_err(local_err);
}
}
/* For postcopy we need to check hugepage sizes match */
if (postcopy_advised && migrate_postcopy_ram() &&
block->page_size != qemu_host_page_size) {
uint64_t remote_page_size = qemu_get_be64(f);
if (remote_page_size != block->page_size) {
error_report("Mismatched RAM page size %s "
"(local) %zd != %" PRId64,
id, block->page_size,
remote_page_size);
ret = -EINVAL;
}
}
if (migrate_ignore_shared()) {
hwaddr addr = qemu_get_be64(f);
if (ramblock_is_ignored(block) &&
block->mr->addr != addr) {
error_report("Mismatched GPAs for block %s "
"%" PRId64 "!= %" PRId64,
id, (uint64_t)addr,
(uint64_t)block->mr->addr);
ret = -EINVAL;
}
}
ram_control_load_hook(f, RAM_CONTROL_BLOCK_REG,
block->idstr);
} else {
error_report("Unknown ramblock \"%s\", cannot "
"accept migration", id);
ret = -EINVAL;
if (migrate_fixed_ram()) {
ret = parse_ramblocks_fixed_ram(f);
if (ret == 1) {
mis->ram_migrated = true;
}
} else {
ret = parse_ramblocks(f, addr);
}
break;

View File

@@ -98,6 +98,7 @@ int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t start);
void postcopy_preempt_shutdown_file(MigrationState *s);
void *postcopy_preempt_thread(void *opaque);
void ramblock_set_shadow_bmap(RAMBlock *block, ram_addr_t offset, bool set);
/* ram cache */
int colo_init_ram_cache(void);

View File

@@ -241,6 +241,7 @@ static bool should_validate_capability(int capability)
/* Validate only new capabilities to keep compatibility. */
switch (capability) {
case MIGRATION_CAPABILITY_X_IGNORE_SHARED:
case MIGRATION_CAPABILITY_FIXED_RAM:
return true;
default:
return false;
@@ -1206,13 +1207,25 @@ void qemu_savevm_non_migratable_list(strList **reasons)
void qemu_savevm_state_header(QEMUFile *f)
{
MigrationState *s = migrate_get_current();
s->vmdesc = json_writer_new(false);
trace_savevm_state_header();
qemu_put_be32(f, QEMU_VM_FILE_MAGIC);
qemu_put_be32(f, QEMU_VM_FILE_VERSION);
if (s->send_configuration) {
qemu_put_byte(f, QEMU_VM_CONFIGURATION);
/*
* This starts the main json object and is paired with the
* json_writer_end_object in
* qemu_savevm_state_complete_precopy_non_iterable
*/
json_writer_start_object(s->vmdesc, NULL);
json_writer_start_object(s->vmdesc, "configuration");
vmstate_save_state(f, &vmstate_configuration, &savevm_state, s->vmdesc);
json_writer_end_object(s->vmdesc);
}
}
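Because the JSON writer is now created in qemu_savevm_state_header() and the configuration section is recorded in it, the vmstate description that tools such as analyze-migration.py consume gains a 'configuration' entry ahead of the usual fields; roughly this shape (values are illustrative):

vmdesc = {
    "configuration": {
        # fields emitted by vmstate_configuration, including capabilities
    },
    "page_size": 4096,          # added in qemu_savevm_state_setup()
    "devices": [
        # one entry per device section
    ],
}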
@@ -1237,8 +1250,6 @@ void qemu_savevm_state_setup(QEMUFile *f)
Error *local_err = NULL;
int ret;
ms->vmdesc = json_writer_new(false);
json_writer_start_object(ms->vmdesc, NULL);
json_writer_int64(ms->vmdesc, "page_size", qemu_target_page_size());
json_writer_start_array(ms->vmdesc, "devices");
@@ -1627,6 +1638,7 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
qemu_mutex_lock_iothread();
while (qemu_file_get_error(f) == 0) {
migration_update_counters(ms, qemu_clock_get_ms(QEMU_CLOCK_REALTIME));
if (qemu_savevm_state_iterate(f, false) > 0) {
break;
}
@@ -1649,6 +1661,9 @@ static int qemu_savevm_state(QEMUFile *f, Error **errp)
}
migrate_set_state(&ms->state, MIGRATION_STATUS_SETUP, status);
migration_calculate_complete(ms);
trace_migration_status((int)ms->mbps / 8, (int)ms->pages_per_second, ms->total_time);
/* f is outer parameter, it should not stay in global migration state after
* this function finished */
ms->to_dst_file = NULL;

View File

@@ -165,6 +165,7 @@ migration_return_path_end_after(int rp_error) "%d"
migration_thread_after_loop(void) ""
migration_thread_file_err(void) ""
migration_thread_setup_complete(void) ""
migration_status(int mbps, int pages_per_second, int64_t total_time) "%d MB/s, %d pages/s, %ld ms"
open_return_path_on_source(void) ""
open_return_path_on_source_continue(void) ""
postcopy_start(void) ""

View File

@@ -485,7 +485,7 @@
##
{ 'enum': 'MigrationCapability',
'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks',
'compress', 'events', 'postcopy-ram', 'fixed-ram',
{ 'name': 'x-colo', 'features': [ 'unstable' ] },
'release-ram',
'block', 'return-path', 'pause-before-switchover', 'multifd',
@@ -776,6 +776,9 @@
# block device name if there is one, and to their node name
# otherwise. (Since 5.2)
#
# @direct-io: Open migration files with O_DIRECT when possible. Not
# all migration transports support this. (since 8.1)
#
# Features:
# @unstable: Member @x-checkpoint-delay is experimental.
#
@@ -796,7 +799,7 @@
'xbzrle-cache-size', 'max-postcopy-bandwidth',
'max-cpu-throttle', 'multifd-compression',
'multifd-zlib-level' ,'multifd-zstd-level',
'block-bitmap-mapping', 'direct-io' ] }
##
# @MigrateSetParameters:
@@ -941,6 +944,9 @@
# block device name if there is one, and to their node name
# otherwise. (Since 5.2)
#
# @direct-io: Open migration files with O_DIRECT when possible. Not
# all migration transports support this. (since 8.1)
#
# Features:
# @unstable: Member @x-checkpoint-delay is experimental.
#
@@ -976,7 +982,8 @@
'*multifd-compression': 'MultiFDCompression',
'*multifd-zlib-level': 'uint8',
'*multifd-zstd-level': 'uint8',
'*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
'*direct-io': 'bool' } }
##
# @migrate-set-parameters:
@@ -1141,6 +1148,9 @@
# block device name if there is one, and to their node name
# otherwise. (Since 5.2)
#
# @direct-io: Open migration files with O_DIRECT when possible. Not
# all migration transports support this. (since 8.1)
#
# Features:
# @unstable: Member @x-checkpoint-delay is experimental.
#
@@ -1174,7 +1184,8 @@
'*multifd-compression': 'MultiFDCompression',
'*multifd-zlib-level': 'uint8',
'*multifd-zstd-level': 'uint8',
'*block-bitmap-mapping': [ 'BitmapMigrationNodeAlias' ],
'*direct-io': 'bool' } }
##
# @query-migrate-parameters:

View File

@@ -23,7 +23,7 @@ import argparse
import collections
import struct
import sys
import math
def mkdir_p(path):
try:
@@ -119,11 +119,16 @@ class RamSection(object):
self.file = file
self.section_key = section_key
self.TARGET_PAGE_SIZE = ramargs['page_size']
self.TARGET_PAGE_BITS = int(math.log2(self.TARGET_PAGE_SIZE))
self.dump_memory = ramargs['dump_memory']
self.write_memory = ramargs['write_memory']
self.fixed_ram = ramargs['fixed-ram']
self.sizeinfo = collections.OrderedDict()
self.bitmap_offset = collections.OrderedDict()
self.pages_offset = collections.OrderedDict()
self.data = collections.OrderedDict()
self.data['section sizes'] = self.sizeinfo
self.ram_read = False
self.name = ''
if self.write_memory:
self.files = { }
@@ -140,7 +145,13 @@ class RamSection(object):
def getDict(self):
return self.data
def write_or_dump_fixed_ram(self):
pass
def read(self):
if self.fixed_ram and self.ram_read:
return
# Read all RAM sections
while True:
addr = self.file.read64()
@@ -167,7 +178,26 @@ class RamSection(object):
f.truncate(0)
f.truncate(len)
self.files[self.name] = f
if self.fixed_ram:
bitmap_len = self.file.read32()
# skip the pages_offset which we don't need
offset = self.file.tell() + 8
self.bitmap_offset[self.name] = offset
offset = ((offset + bitmap_len + self.TARGET_PAGE_SIZE - 1) //
self.TARGET_PAGE_SIZE) * self.TARGET_PAGE_SIZE
self.pages_offset[self.name] = offset
self.file.file.seek(offset + len)
flags &= ~self.RAM_SAVE_FLAG_MEM_SIZE
if self.fixed_ram:
self.ram_read = True
# now we should rewind to the ram page offset of the first
# ram section
if self.fixed_ram:
if self.write_memory or self.dump_memory:
self.write_or_dump_fixed_ram()
return
if flags & self.RAM_SAVE_FLAG_COMPRESS:
if flags & self.RAM_SAVE_FLAG_CONTINUE:
@@ -208,7 +238,7 @@ class RamSection(object):
# End of RAM section
if flags & self.RAM_SAVE_FLAG_EOS:
break
return
if flags != 0:
raise Exception("Unknown RAM flags: %x" % flags)
@@ -521,6 +551,7 @@ class MigrationDump(object):
ramargs['page_size'] = self.vmsd_desc['page_size']
ramargs['dump_memory'] = dump_memory
ramargs['write_memory'] = write_memory
ramargs['fixed-ram'] = False
self.section_classes[('ram',0)][1] = ramargs
while True:
@@ -528,8 +559,20 @@ class MigrationDump(object):
if section_type == self.QEMU_VM_EOF:
break
elif section_type == self.QEMU_VM_CONFIGURATION:
config_desc = self.vmsd_desc.get('configuration')
if config_desc is not None:
config = VMSDSection(file, 1, config_desc, 'configuration')
config.read()
caps = config.data.get("configuration/capabilities")
if caps is not None:
caps = caps.data["capabilities"]
if type(caps) != list:
caps = [caps]
for i in caps:
# chomp out string length
cap = i.data[1:].decode("utf8")
if cap == "fixed-ram":
ramargs['fixed-ram'] = True
elif section_type == self.QEMU_VM_SECTION_START or section_type == self.QEMU_VM_SECTION_FULL:
section_id = file.read32()
name = file.readstr()

View File

@@ -35,10 +35,11 @@ from qemu.machine import QEMUMachine
class Engine(object):
def __init__(self, binary, dst_host, kernel, initrd, transport="tcp",
sleep=15, verbose=False, debug=False, dst_file="/tmp/migfile"):
self._binary = binary # Path to QEMU binary
self._dst_host = dst_host # Hostname of target host
self._dst_file = dst_file # Path to file (for file transport)
self._kernel = kernel # Path to kernel image
self._initrd = initrd # Path to stress initrd
self._transport = transport # 'unix' or 'tcp' or 'rdma'
@@ -203,6 +204,23 @@ class Engine(object):
resp = dst.command("migrate-set-parameters",
multifd_channels=scenario._multifd_channels)
if scenario._fixed_ram:
resp = src.command("migrate-set-capabilities",
capabilities = [
{ "capability": "fixed-ram",
"state": True }
])
resp = dst.command("migrate-set-capabilities",
capabilities = [
{ "capability": "fixed-ram",
"state": True }
])
if scenario._direct_io:
resp = src.command("migrate-set-parameters",
direct_io=scenario._direct_io)
resp = src.command("migrate", uri=connect_uri)
post_copy = False
@@ -233,6 +251,12 @@ class Engine(object):
progress_history.append(progress)
if progress._status == "completed":
print("Completed")
if connect_uri[0:5] == "file:":
if self._verbose:
print("Migrating incoming")
dst.command("migrate-incoming", uri=connect_uri)
if self._verbose:
print("Sleeping %d seconds for final guest workload run" % self._sleep)
sleep_secs = self._sleep
@@ -357,7 +381,11 @@ class Engine(object):
if self._dst_host != "localhost":
tunnelled = True
argv = self._get_common_args(hardware, tunnelled)
return argv + ["-incoming", uri]
incoming = ["-incoming", uri]
if uri[0:5] == "file:":
incoming = ["-incoming", "defer"]
return argv + incoming
@staticmethod
def _get_common_wrapper(cpu_bind, mem_bind):
@@ -417,6 +445,10 @@ class Engine(object):
os.remove(monaddr)
except:
pass
elif self._transport == "file":
if self._dst_host != "localhost":
raise Exception("Use unix migration transport for non-local host")
uri = "file:%s" % self._dst_file
if self._dst_host != "localhost":
dstmonaddr = ("localhost", 9001)
@@ -453,6 +485,9 @@ class Engine(object):
if self._dst_host == "localhost" and os.path.exists(dstmonaddr):
os.remove(dstmonaddr)
if uri[0:5] == "file:" and os.path.exists(uri[5:]):
os.remove(uri[5:])
if self._verbose:
print("Finished migration")

View File

@@ -30,7 +30,8 @@ class Scenario(object):
auto_converge=False, auto_converge_step=10,
compression_mt=False, compression_mt_threads=1,
compression_xbzrle=False, compression_xbzrle_cache=10,
multifd=False, multifd_channels=2,
fixed_ram=False, direct_io=False):
self._name = name
@@ -60,6 +61,11 @@ class Scenario(object):
self._multifd = multifd
self._multifd_channels = multifd_channels
self._fixed_ram = fixed_ram
self._direct_io = direct_io
def serialize(self):
return {
"name": self._name,
@@ -79,6 +85,8 @@ class Scenario(object):
"compression_xbzrle_cache": self._compression_xbzrle_cache,
"multifd": self._multifd,
"multifd_channels": self._multifd_channels,
"fixed_ram": self._fixed_ram,
"direct_io": self._direct_io,
}
@classmethod
@@ -100,4 +108,6 @@ class Scenario(object):
data["compression_xbzrle"],
data["compression_xbzrle_cache"],
data["multifd"],
data["multifd_channels"])
data["multifd_channels"],
data["fixed_ram"],
data["direct_io"])

View File

@@ -48,6 +48,7 @@ class BaseShell(object):
parser.add_argument("--kernel", dest="kernel", default="/boot/vmlinuz-%s" % platform.release())
parser.add_argument("--initrd", dest="initrd", default="tests/migration/initrd-stress.img")
parser.add_argument("--transport", dest="transport", default="unix")
parser.add_argument("--dst-file", dest="dst_file")
# Hardware args
@@ -71,7 +72,8 @@ class BaseShell(object):
transport=args.transport,
sleep=args.sleep,
debug=args.debug,
verbose=args.verbose,
dst_file=args.dst_file)
def get_hardware(self, args):
def split_map(value):
@@ -127,6 +129,13 @@ class Shell(BaseShell):
parser.add_argument("--multifd-channels", dest="multifd_channels",
default=2, type=int)
parser.add_argument("--fixed-ram", dest="fixed_ram", default=False,
action="store_true")
parser.add_argument("--direct-io", dest="direct_io", default=False,
action="store_true")
def get_scenario(self, args):
return Scenario(name="perfreport",
downtime=args.downtime,
@@ -150,7 +159,12 @@ class Shell(BaseShell):
compression_xbzrle_cache=args.compression_xbzrle_cache,
multifd=args.multifd,
multifd_channels=args.multifd_channels,
fixed_ram=args.fixed_ram,
direct_io=args.direct_io)
def run(self, argv):
args = self._parser.parse_args(argv)

View File

@@ -130,6 +130,25 @@ void migrate_qmp(QTestState *who, const char *uri, const char *fmt, ...)
qobject_unref(rsp);
}
void migrate_incoming_qmp(QTestState *who, const char *uri, const char *fmt, ...)
{
va_list ap;
QDict *args, *rsp;
va_start(ap, fmt);
args = qdict_from_vjsonf_nofail(fmt, ap);
va_end(ap);
g_assert(!qdict_haskey(args, "uri"));
qdict_put_str(args, "uri", uri);
rsp = qtest_qmp(who, "{ 'execute': 'migrate-incoming', 'arguments': %p}", args);
g_assert(qdict_haskey(rsp, "return"));
qobject_unref(rsp);
}
/*
* Note: caller is responsible to free the returned object via
* qobject_unref() after use

View File

@@ -31,6 +31,10 @@ QDict *qmp_command(QTestState *who, const char *command, ...);
G_GNUC_PRINTF(3, 4)
void migrate_qmp(QTestState *who, const char *uri, const char *fmt, ...);
G_GNUC_PRINTF(3, 4)
void migrate_incoming_qmp(QTestState *who, const char *uri,
const char *fmt, ...);
QDict *migrate_query(QTestState *who);
QDict *migrate_query_not_failed(QTestState *who);

View File

@@ -748,6 +748,7 @@ static void test_migrate_end(QTestState *from, QTestState *to, bool test_dest)
cleanup("migsocket");
cleanup("src_serial");
cleanup("dest_serial");
cleanup("migfile");
}
#ifdef CONFIG_GNUTLS
@@ -1371,6 +1372,14 @@ static void test_precopy_common(MigrateCommon *args)
* hanging forever if migration didn't converge */
wait_for_migration_complete(from);
/*
* For file-based migration the destination must start its incoming
* migration only after the source has finished writing the file.
*/
if (args->connect_uri && strstr(args->connect_uri, "file:")) {
migrate_incoming_qmp(to, args->connect_uri, "{}");
}
if (!got_stop) {
qtest_qmp_eventwait(from, "STOP");
}
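The ordering matters: with a file: URI the destination is started with '-incoming defer', the source writes the whole file, and only then does the test issue migrate-incoming on the destination. A hedged sketch of that flow over QMP (the qmp callables and the wait helper are assumptions, not the qtest API):

import time

def wait_until_completed(qmp):
    # assumes qmp() returns the command's 'return' member as a dict
    while qmp("query-migrate").get("status") != "completed":
        time.sleep(0.1)

def file_fixed_ram_migration(src_qmp, dst_qmp, path):
    uri = "file:" + path
    for qmp in (src_qmp, dst_qmp):
        qmp("migrate-set-capabilities",
            capabilities=[{"capability": "fixed-ram", "state": True}])
    src_qmp("migrate", uri=uri)
    wait_until_completed(src_qmp)         # source must finish writing first
    dst_qmp("migrate-incoming", uri=uri)  # only now load the file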
@@ -1524,6 +1533,62 @@ static void test_precopy_unix_xbzrle(void)
test_precopy_common(&args);
}
static void test_precopy_file_stream_ram(void)
{
g_autofree char *uri = g_strdup_printf("file:%s/migfile", tmpfs);
MigrateCommon args = {
.connect_uri = uri,
.listen_uri = "defer",
};
test_precopy_common(&args);
}
static void *migrate_fixed_ram_start(QTestState *from, QTestState *to)
{
migrate_set_capability(from, "fixed-ram", true);
migrate_set_capability(to, "fixed-ram", true);
return NULL;
}
static void test_precopy_file_fixed_ram(void)
{
g_autofree char *uri = g_strdup_printf("file:%s/migfile", tmpfs);
MigrateCommon args = {
.connect_uri = uri,
.listen_uri = "defer",
.start_hook = migrate_fixed_ram_start,
};
test_precopy_common(&args);
}
static void *migrate_multifd_fixed_ram_start(QTestState *from, QTestState *to)
{
migrate_fixed_ram_start(from, to);
migrate_set_parameter_int(from, "multifd-channels", 4);
migrate_set_parameter_int(to, "multifd-channels", 4);
migrate_set_capability(from, "multifd", true);
migrate_set_capability(to, "multifd", true);
return NULL;
}
static void test_multifd_file_fixed_ram(void)
{
g_autofree char *uri = g_strdup_printf("file:%s/migfile", tmpfs);
MigrateCommon args = {
.connect_uri = uri,
.listen_uri = "defer",
.start_hook = migrate_multifd_fixed_ram_start,
};
test_precopy_common(&args);
}
static void test_precopy_tcp_plain(void)
{
MigrateCommon args = {
@@ -2515,6 +2580,14 @@ int main(int argc, char **argv)
qtest_add_func("/migration/bad_dest", test_baddest);
qtest_add_func("/migration/precopy/unix/plain", test_precopy_unix_plain);
qtest_add_func("/migration/precopy/unix/xbzrle", test_precopy_unix_xbzrle);
qtest_add_func("/migration/precopy/file/stream-ram",
test_precopy_file_stream_ram);
qtest_add_func("/migration/precopy/file/fixed-ram",
test_precopy_file_fixed_ram);
qtest_add_func("/migration/multifd/file/fixed-ram",
test_multifd_file_fixed_ram);
#ifdef CONFIG_GNUTLS
qtest_add_func("/migration/precopy/unix/tls/psk",
test_precopy_unix_tls_psk);

View File

@@ -277,6 +277,15 @@ int qemu_lock_fd_test(int fd, int64_t start, int64_t len, bool exclusive)
}
#endif
bool qemu_has_direct_io(void)
{
#ifdef O_DIRECT
return true;
#else
return false;
#endif
}
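qemu_has_direct_io() only reports whether O_DIRECT exists at build time; actually using the flag requires the buffer, file offset and length to be suitably aligned. A minimal, Linux-only Python sketch of an O_DIRECT write (the alignment value is an assumption; 512 or 4096 bytes is typical):

import mmap, os

ALIGN = 4096                            # assumed O_DIRECT alignment requirement

def write_direct(path, data, offset):
    """Write aligned data at an aligned offset with O_DIRECT."""
    assert offset % ALIGN == 0 and len(data) % ALIGN == 0
    buf = mmap.mmap(-1, len(data))      # anonymous mmap memory is page aligned
    buf[:] = data
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o600)
    try:
        os.pwrite(fd, buf, offset)
    finally:
        os.close(fd)
        buf.close()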
static int qemu_open_cloexec(const char *name, int flags, mode_t mode)
{
int ret;