Accepting request 734440 from home:bfrogers:branches:Virtualization

Add in upstream stable patches. Also a few more minor tweaks.

OBS-URL: https://build.opensuse.org/request/show/734440
OBS-URL: https://build.opensuse.org/package/show/Virtualization/qemu?expand=0&rev=492
Bruce Rogers 2019-10-02 02:17:15 +00:00 committed by Git OBS Bridge
parent 5d1ac3f151
commit 36ac654a1b
28 changed files with 1582 additions and 83 deletions

@@ -0,0 +1,93 @@
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Date: Thu, 12 Sep 2019 00:08:49 +0200
Subject: block/create: Do not abort if a block driver is not available
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Git-commit: d90d5cae2b10efc0e8d0b3cc91ff16201853d3ba
The 'blockdev-create' QMP command was introduced as experimental
feature in commit b0292b851b8, using the assert() debug call.
It got promoted to 'stable' command in 3fb588a0f2c, but the
assert call was not removed.
Some block drivers are optional, and bdrv_find_format() might
return a NULL value, triggering the assertion.
Stable code is not expected to abort, so return an error instead.
This is easily reproducible when libnfs is not installed:
./configure
[...]
module support no
Block whitelist (rw)
Block whitelist (ro)
libiscsi support yes
libnfs support no
[...]
Start QEMU:
$ qemu-system-x86_64 -S -qmp unix:/tmp/qemu.qmp,server,nowait
Send the 'blockdev-create' with the 'nfs' driver:
$ ( cat << 'EOF'
{'execute': 'qmp_capabilities'}
{'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
EOF
) | socat STDIO UNIX:/tmp/qemu.qmp
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 1, "major": 4}, "package": "v4.1.0-733-g89ea03a7dc"}, "capabilities": ["oob"]}}
{"return": {}}
QEMU crashes:
$ gdb qemu-system-x86_64 core
Program received signal SIGSEGV, Segmentation fault.
(gdb) bt
#0 0x00007ffff510957f in raise () at /lib64/libc.so.6
#1 0x00007ffff50f3895 in abort () at /lib64/libc.so.6
#2 0x00007ffff50f3769 in _nl_load_domain.cold.0 () at /lib64/libc.so.6
#3 0x00007ffff5101a26 in .annobin_assert.c_end () at /lib64/libc.so.6
#4 0x0000555555d7e1f1 in qmp_blockdev_create (job_id=0x555556baee40 "x", options=0x555557666610, errp=0x7fffffffc770) at block/create.c:69
#5 0x0000555555c96b52 in qmp_marshal_blockdev_create (args=0x7fffdc003830, ret=0x7fffffffc7f8, errp=0x7fffffffc7f0) at qapi/qapi-commands-block-core.c:1314
#6 0x0000555555deb0a0 in do_qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false, errp=0x7fffffffc898) at qapi/qmp-dispatch.c:131
#7 0x0000555555deb2a1 in qmp_dispatch (cmds=0x55555645de70 <qmp_commands>, request=0x7fffdc005c70, allow_oob=false) at qapi/qmp-dispatch.c:174
With this patch applied, QEMU returns a QMP error:
{'execute': 'blockdev-create', 'arguments': {'job-id': 'x', 'options': {'size': 0, 'driver': 'nfs', 'location': {'path': '/', 'server': {'host': '::1', 'type': 'inet'}}}}, 'id': 'x'}
{"id": "x", "error": {"class": "GenericError", "desc": "Block driver 'nfs' not found or not supported"}}
Cc: qemu-stable@nongnu.org
Reported-by: Xu Tian <xutian@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/create.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/create.c b/block/create.c
index 95341219efcd670a5151d0d3f4f5..de5e97bb186ffdf039fb39980874 100644
--- a/block/create.c
+++ b/block/create.c
@@ -63,9 +63,13 @@ void qmp_blockdev_create(const char *job_id, BlockdevCreateOptions *options,
const char *fmt = BlockdevDriver_str(options->driver);
BlockDriver *drv = bdrv_find_format(fmt);
+ if (!drv) {
+ error_setg(errp, "Block driver '%s' not found or not supported", fmt);
+ return;
+ }
+
/* If the driver is in the schema, we know that it exists. But it may not
* be whitelisted. */
- assert(drv);
if (bdrv_uses_whitelist() && !bdrv_is_whitelisted(drv, false)) {
error_setg(errp, "Driver is not whitelisted");
return;

@@ -0,0 +1,163 @@
From: Max Reitz <mreitz@redhat.com>
Date: Fri, 23 Aug 2019 15:03:40 +0200
Subject: block/file-posix: Reduce xfsctl() use
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Git-commit: b2c6f23f4a9f6d8f1b648705cd46d3713b78d6a2
This patch removes xfs_write_zeroes() and xfs_discard(). Both functions
have been added just before the same feature was present through
fallocate():
- fallocate() has supported PUNCH_HOLE for XFS since Linux 2.6.38 (March
2011); xfs_discard() was added in December 2010.
- fallocate() has supported ZERO_RANGE for XFS since Linux 3.15 (June
2014); xfs_write_zeroes() was added in November 2013.
Nowadays, all systems that qemu runs on should support both fallocate()
features (RHEL 7's kernel does).
xfsctl() is still useful for getting the request alignment for O_DIRECT,
so this patch does not remove our dependency on it completely.
Note that xfs_write_zeroes() had a bug: It calls ftruncate() when the
file is shorter than the specified range (because ZERO_RANGE does not
increase the file length). ftruncate() may yield and then discard data
that parallel write requests have written past the EOF in the meantime.
Dropping the function altogether fixes the bug.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Fixes: 50ba5b2d994853b38fed10e0841b119da0f8b8e5
Reported-by: Lukáš Doktor <ldoktor@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Tested-by: Stefano Garzarella <sgarzare@redhat.com>
Tested-by: John Snow <jsnow@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/file-posix.c | 77 +---------------------------------------------
1 file changed, 1 insertion(+), 76 deletions(-)
diff --git a/block/file-posix.c b/block/file-posix.c
index 4479cc7ab467f217cff8b3efbd1f..992eb4a798b99fe02e93103028c6 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -1445,59 +1445,6 @@ out:
}
}
-#ifdef CONFIG_XFS
-static int xfs_write_zeroes(BDRVRawState *s, int64_t offset, uint64_t bytes)
-{
- int64_t len;
- struct xfs_flock64 fl;
- int err;
-
- len = lseek(s->fd, 0, SEEK_END);
- if (len < 0) {
- return -errno;
- }
-
- if (offset + bytes > len) {
- /* XFS_IOC_ZERO_RANGE does not increase the file length */
- if (ftruncate(s->fd, offset + bytes) < 0) {
- return -errno;
- }
- }
-
- memset(&fl, 0, sizeof(fl));
- fl.l_whence = SEEK_SET;
- fl.l_start = offset;
- fl.l_len = bytes;
-
- if (xfsctl(NULL, s->fd, XFS_IOC_ZERO_RANGE, &fl) < 0) {
- err = errno;
- trace_file_xfs_write_zeroes(strerror(errno));
- return -err;
- }
-
- return 0;
-}
-
-static int xfs_discard(BDRVRawState *s, int64_t offset, uint64_t bytes)
-{
- struct xfs_flock64 fl;
- int err;
-
- memset(&fl, 0, sizeof(fl));
- fl.l_whence = SEEK_SET;
- fl.l_start = offset;
- fl.l_len = bytes;
-
- if (xfsctl(NULL, s->fd, XFS_IOC_UNRESVSP64, &fl) < 0) {
- err = errno;
- trace_file_xfs_discard(strerror(errno));
- return -err;
- }
-
- return 0;
-}
-#endif
-
static int translate_err(int err)
{
if (err == -ENODEV || err == -ENOSYS || err == -EOPNOTSUPP ||
@@ -1553,10 +1500,8 @@ static ssize_t handle_aiocb_write_zeroes_block(RawPosixAIOData *aiocb)
static int handle_aiocb_write_zeroes(void *opaque)
{
RawPosixAIOData *aiocb = opaque;
-#if defined(CONFIG_FALLOCATE) || defined(CONFIG_XFS)
- BDRVRawState *s = aiocb->bs->opaque;
-#endif
#ifdef CONFIG_FALLOCATE
+ BDRVRawState *s = aiocb->bs->opaque;
int64_t len;
#endif
@@ -1564,12 +1509,6 @@ static int handle_aiocb_write_zeroes(void *opaque)
return handle_aiocb_write_zeroes_block(aiocb);
}
-#ifdef CONFIG_XFS
- if (s->is_xfs) {
- return xfs_write_zeroes(s, aiocb->aio_offset, aiocb->aio_nbytes);
- }
-#endif
-
#ifdef CONFIG_FALLOCATE_ZERO_RANGE
if (s->has_write_zeroes) {
int ret = do_fallocate(s->fd, FALLOC_FL_ZERO_RANGE,
@@ -1632,14 +1571,6 @@ static int handle_aiocb_write_zeroes_unmap(void *opaque)
}
#endif
-#ifdef CONFIG_XFS
- if (s->is_xfs) {
- /* xfs_discard() guarantees that the discarded area reads as all-zero
- * afterwards, so we can use it here. */
- return xfs_discard(s, aiocb->aio_offset, aiocb->aio_nbytes);
- }
-#endif
-
/* If we couldn't manage to unmap while guaranteed that the area reads as
* all-zero afterwards, just write zeroes without unmapping */
ret = handle_aiocb_write_zeroes(aiocb);
@@ -1716,12 +1647,6 @@ static int handle_aiocb_discard(void *opaque)
ret = -errno;
#endif
} else {
-#ifdef CONFIG_XFS
- if (s->is_xfs) {
- return xfs_discard(s, aiocb->aio_offset, aiocb->aio_nbytes);
- }
-#endif
-
#ifdef CONFIG_FALLOCATE_PUNCH_HOLE
ret = do_fallocate(s->fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
aiocb->aio_offset, aiocb->aio_nbytes);
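
The two fallocate() modes used above are what replaced the removed xfsctl() helpers. The following is a minimal standalone sketch of those calls, assuming Linux with a glibc recent enough to expose the FALLOC_FL_* flags through fcntl.h; the file name and sizes are arbitrary examples and error handling is reduced to perror().

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Arbitrary example file; QEMU operates on the already-open image fd. */
    int fd = open("example.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, 1 << 20) < 0) {
        perror("setup");
        return 1;
    }

    /* What xfs_write_zeroes() did: zero a range without changing the length. */
    if (fallocate(fd, FALLOC_FL_ZERO_RANGE, 0, 64 * 1024) < 0) {
        perror("FALLOC_FL_ZERO_RANGE");     /* EOPNOTSUPP on some filesystems */
    }

    /* What xfs_discard() did: punch a hole while keeping the file size. */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  64 * 1024, 64 * 1024) < 0) {
        perror("FALLOC_FL_PUNCH_HOLE");
    }

    close(fd);
    return 0;
}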

@@ -0,0 +1,39 @@
From: Peter Lieven <pl@kamp.de>
Date: Tue, 10 Sep 2019 17:41:09 +0200
Subject: block/nfs: tear down aio before nfs_close
Git-commit: 601dc6559725f7a614b6f893611e17ff0908e914
nfs_close is a sync call from libnfs and has its own event
handler polling on the nfs FD. Avoid that both QEMU and libnfs
are interfering here.
CC: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/nfs.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/block/nfs.c b/block/nfs.c
index d93241b3bb84cf0a662f0ddec582..2b7a0782419af82aea80dd76e474 100644
--- a/block/nfs.c
+++ b/block/nfs.c
@@ -390,12 +390,14 @@ static void nfs_attach_aio_context(BlockDriverState *bs,
static void nfs_client_close(NFSClient *client)
{
if (client->context) {
+ qemu_mutex_lock(&client->mutex);
+ aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
+ false, NULL, NULL, NULL, NULL);
+ qemu_mutex_unlock(&client->mutex);
if (client->fh) {
nfs_close(client->context, client->fh);
client->fh = NULL;
}
- aio_set_fd_handler(client->aio_context, nfs_get_fd(client->context),
- false, NULL, NULL, NULL, NULL);
nfs_destroy_context(client->context);
client->context = NULL;
}

@@ -0,0 +1,59 @@
From: Sergio Lopez <slp@redhat.com>
Date: Wed, 11 Sep 2019 12:03:16 +0200
Subject: blockjob: update nodes head while removing all bdrv
Git-commit: d876bf676f5e7c6aa9ac64555e48cba8734ecb2f
block_job_remove_all_bdrv() iterates through job->nodes, calling
bdrv_root_unref_child() for each entry. The call to the latter may
reach child_job_[can_]set_aio_ctx(), which will also attempt to
traverse job->nodes, potentially finding entries that were freed
on previous iterations.
To avoid this situation, update job->nodes head on each iteration to
ensure that already freed entries are no longer linked to the list.
RHBZ: https://bugzilla.redhat.com/show_bug.cgi?id=1746631
Signed-off-by: Sergio Lopez <slp@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190911100316.32282-1-mreitz@redhat.com
Reviewed-by: Sergio Lopez <slp@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
blockjob.c | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/blockjob.c b/blockjob.c
index 20b7f557da3e491927b99b113b73..74abb97bfdf27b5a9f4f82cd55b4 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -186,14 +186,23 @@ static const BdrvChildRole child_job = {
void block_job_remove_all_bdrv(BlockJob *job)
{
- GSList *l;
- for (l = job->nodes; l; l = l->next) {
+ /*
+ * bdrv_root_unref_child() may reach child_job_[can_]set_aio_ctx(),
+ * which will also traverse job->nodes, so consume the list one by
+ * one to make sure that such a concurrent access does not attempt
+ * to process an already freed BdrvChild.
+ */
+ while (job->nodes) {
+ GSList *l = job->nodes;
BdrvChild *c = l->data;
+
+ job->nodes = l->next;
+
bdrv_op_unblock_all(c->bs, job->blocker);
bdrv_root_unref_child(c);
+
+ g_slist_free_1(l);
}
- g_slist_free(job->nodes);
- job->nodes = NULL;
}
bool block_job_has_bdrv(BlockJob *job, BlockDriverState *bs)
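
The loop above is an instance of a general GLib pattern: detach the head link before running a callback that might itself walk the same list. A minimal standalone sketch of that pattern, using plain GLib and a hypothetical process() callback in place of QEMU's BdrvChild handling:

#include <glib.h>
#include <stdio.h>

static GSList *nodes;

static void process(gpointer data)
{
    /* In blockjob.c this is where bdrv_root_unref_child() runs and may
     * re-read the (already shortened) list head. */
    printf("processing %s, %u entries still linked\n",
           (char *)data, g_slist_length(nodes));
}

int main(void)
{
    nodes = g_slist_append(nodes, "a");
    nodes = g_slist_append(nodes, "b");
    nodes = g_slist_append(nodes, "c");

    while (nodes) {
        GSList *l = nodes;
        gpointer data = l->data;

        nodes = l->next;      /* unlink before the potentially reentrant call */
        process(data);
        g_slist_free_1(l);    /* free only the detached link */
    }
    return 0;
}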

@@ -1,3 +1,3 @@
version https://git-lfs.github.com/spec/v1
-oid sha256:2aa664ee29f9254cf714362c9239c1b5b549f0d2d836cf30c25a9ed0962da796
-size 37808
+oid sha256:956551eb5fd32778ff718dd66924f10f4dc2ac7c46466065a4724d4a1f1edd05
+size 52432

@@ -0,0 +1,71 @@
From: Max Reitz <mreitz@redhat.com>
Date: Tue, 10 Sep 2019 14:41:32 +0200
Subject: curl: Check completion in curl_multi_do()
Git-commit: 948403bcb1c7e71dcbe8ab8479cf3934a0efcbb5
While it is more likely that transfers complete after some file
descriptor has data ready to read, we probably should not rely on it.
Better be safe than sorry and call curl_multi_check_completion() in
curl_multi_do(), too, just like it is done in curl_multi_read().
With this change, curl_multi_do() and curl_multi_read() are actually the
same, so drop curl_multi_read() and use curl_multi_do() as the sole FD
handler.
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-4-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/curl.c | 14 ++------------
1 file changed, 2 insertions(+), 12 deletions(-)
diff --git a/block/curl.c b/block/curl.c
index 95d7b77dc0b1cf25443effdb9eb3..5838afef99e070d8e7b704fa55e7 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -139,7 +139,6 @@ typedef struct BDRVCURLState {
static void curl_clean_state(CURLState *s);
static void curl_multi_do(void *arg);
-static void curl_multi_read(void *arg);
#ifdef NEED_CURL_TIMER_CALLBACK
/* Called from curl_multi_do_locked, with s->mutex held. */
@@ -186,7 +185,7 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
switch (action) {
case CURL_POLL_IN:
aio_set_fd_handler(s->aio_context, fd, false,
- curl_multi_read, NULL, NULL, state);
+ curl_multi_do, NULL, NULL, state);
break;
case CURL_POLL_OUT:
aio_set_fd_handler(s->aio_context, fd, false,
@@ -194,7 +193,7 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
break;
case CURL_POLL_INOUT:
aio_set_fd_handler(s->aio_context, fd, false,
- curl_multi_read, curl_multi_do, NULL, state);
+ curl_multi_do, curl_multi_do, NULL, state);
break;
case CURL_POLL_REMOVE:
aio_set_fd_handler(s->aio_context, fd, false,
@@ -416,15 +415,6 @@ static void curl_multi_do(void *arg)
{
CURLState *s = (CURLState *)arg;
- qemu_mutex_lock(&s->s->mutex);
- curl_multi_do_locked(s);
- qemu_mutex_unlock(&s->s->mutex);
-}
-
-static void curl_multi_read(void *arg)
-{
- CURLState *s = (CURLState *)arg;
-
qemu_mutex_lock(&s->s->mutex);
curl_multi_do_locked(s);
curl_multi_check_completion(s->s);

@@ -0,0 +1,146 @@
From: Max Reitz <mreitz@redhat.com>
Date: Tue, 10 Sep 2019 14:41:35 +0200
Subject: curl: Handle success in multi_check_completion
Git-commit: bfb23b480a49114315877aacf700b49453e0f9d9
Background: As of cURL 7.59.0, it verifies that several functions are
not called from within a callback. Among these functions is
curl_multi_add_handle().
curl_read_cb() is a callback from cURL and not a coroutine. Waking up
acb->co will lead to entering it then and there, which means the current
request will settle and the caller (if it runs in the same coroutine)
may then issue the next request. In such a case, we will enter
curl_setup_preadv() effectively from within curl_read_cb().
Calling curl_multi_add_handle() will then fail and the new request will
not be processed.
Fix this by not letting curl_read_cb() wake up acb->co. Instead, leave
the whole business of settling the AIOCB objects to
curl_multi_check_completion() (which is called from our timer callback
and our FD handler, so not from any cURL callbacks).
Reported-by: Natalie Gavrielov <ngavrilo@redhat.com>
Buglink: https://bugzilla.redhat.com/show_bug.cgi?id=1740193
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-7-mreitz@redhat.com
Reviewed-by: John Snow <jsnow@redhat.com>
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/curl.c | 69 ++++++++++++++++++++++------------------------------
1 file changed, 29 insertions(+), 40 deletions(-)
diff --git a/block/curl.c b/block/curl.c
index fd70f1ebc458f22f6d1a4bc01e1e..c343c7ed3ddad205051d7e3b0196 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -229,7 +229,6 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
{
CURLState *s = ((CURLState*)opaque);
size_t realsize = size * nmemb;
- int i;
trace_curl_read_cb(realsize);
@@ -245,32 +244,6 @@ static size_t curl_read_cb(void *ptr, size_t size, size_t nmemb, void *opaque)
memcpy(s->orig_buf + s->buf_off, ptr, realsize);
s->buf_off += realsize;
- for(i=0; i<CURL_NUM_ACB; i++) {
- CURLAIOCB *acb = s->acb[i];
-
- if (!acb)
- continue;
-
- if ((s->buf_off >= acb->end)) {
- size_t request_length = acb->bytes;
-
- qemu_iovec_from_buf(acb->qiov, 0, s->orig_buf + acb->start,
- acb->end - acb->start);
-
- if (acb->end - acb->start < request_length) {
- size_t offset = acb->end - acb->start;
- qemu_iovec_memset(acb->qiov, offset, 0,
- request_length - offset);
- }
-
- acb->ret = 0;
- s->acb[i] = NULL;
- qemu_mutex_unlock(&s->s->mutex);
- aio_co_wake(acb->co);
- qemu_mutex_lock(&s->s->mutex);
- }
- }
-
read_end:
/* curl will error out if we do not return this value */
return size * nmemb;
@@ -351,13 +324,14 @@ static void curl_multi_check_completion(BDRVCURLState *s)
break;
if (msg->msg == CURLMSG_DONE) {
+ int i;
CURLState *state = NULL;
+ bool error = msg->data.result != CURLE_OK;
+
curl_easy_getinfo(msg->easy_handle, CURLINFO_PRIVATE,
(char **)&state);
- /* ACBs for successful messages get completed in curl_read_cb */
- if (msg->data.result != CURLE_OK) {
- int i;
+ if (error) {
static int errcount = 100;
/* Don't lose the original error message from curl, since
@@ -369,20 +343,35 @@ static void curl_multi_check_completion(BDRVCURLState *s)
error_report("curl: further errors suppressed");
}
}
+ }
- for (i = 0; i < CURL_NUM_ACB; i++) {
- CURLAIOCB *acb = state->acb[i];
+ for (i = 0; i < CURL_NUM_ACB; i++) {
+ CURLAIOCB *acb = state->acb[i];
- if (acb == NULL) {
- continue;
- }
+ if (acb == NULL) {
+ continue;
+ }
+
+ if (!error) {
+ /* Assert that we have read all data */
+ assert(state->buf_off >= acb->end);
+
+ qemu_iovec_from_buf(acb->qiov, 0,
+ state->orig_buf + acb->start,
+ acb->end - acb->start);
- acb->ret = -EIO;
- state->acb[i] = NULL;
- qemu_mutex_unlock(&s->mutex);
- aio_co_wake(acb->co);
- qemu_mutex_lock(&s->mutex);
+ if (acb->end - acb->start < acb->bytes) {
+ size_t offset = acb->end - acb->start;
+ qemu_iovec_memset(acb->qiov, offset, 0,
+ acb->bytes - offset);
+ }
}
+
+ acb->ret = error ? -EIO : 0;
+ state->acb[i] = NULL;
+ qemu_mutex_unlock(&s->mutex);
+ aio_co_wake(acb->co);
+ qemu_mutex_lock(&s->mutex);
}
curl_clean_state(state);

@@ -0,0 +1,49 @@
From: Max Reitz <mreitz@redhat.com>
Date: Tue, 10 Sep 2019 14:41:30 +0200
Subject: curl: Keep pointer to the CURLState in CURLSocket
Git-commit: 0487861685294660b23bc146e1ebd5304aa8bbe0
A follow-up patch will make curl_multi_do() and curl_multi_read() take a
CURLSocket instead of the CURLState. They still need the latter,
though, so add a pointer to it to the former.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Message-id: 20190910124136.10565-2-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/curl.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/block/curl.c b/block/curl.c
index d4c8e94f3e0fe26ee221e763356e..92dc2f630e20f4a6b138c9c82b8b 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -80,6 +80,7 @@ static CURLMcode __curl_multi_socket_action(CURLM *multi_handle,
#define CURL_BLOCK_OPT_TIMEOUT_DEFAULT 5
struct BDRVCURLState;
+struct CURLState;
static bool libcurl_initialized;
@@ -97,6 +98,7 @@ typedef struct CURLAIOCB {
typedef struct CURLSocket {
int fd;
+ struct CURLState *state;
QLIST_ENTRY(CURLSocket) next;
} CURLSocket;
@@ -180,6 +182,7 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
if (!socket) {
socket = g_new0(CURLSocket, 1);
socket->fd = fd;
+ socket->state = state;
QLIST_INSERT_HEAD(&state->sockets, socket, next);
}
socket = NULL;

@@ -0,0 +1,56 @@
From: Max Reitz <mreitz@redhat.com>
Date: Tue, 10 Sep 2019 14:41:31 +0200
Subject: curl: Keep *socket until the end of curl_sock_cb()
Git-commit: 007f339b1099af46a008dac438ca0943e31dba72
This does not really change anything, but it makes the code a bit easier
to follow once we use @socket as the opaque pointer for
aio_set_fd_handler().
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-3-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/curl.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/block/curl.c b/block/curl.c
index 92dc2f630e20f4a6b138c9c82b8b..95d7b77dc0b1cf25443effdb9eb3 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -172,10 +172,6 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
QLIST_FOREACH(socket, &state->sockets, next) {
if (socket->fd == fd) {
- if (action == CURL_POLL_REMOVE) {
- QLIST_REMOVE(socket, next);
- g_free(socket);
- }
break;
}
}
@@ -185,7 +181,6 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
socket->state = state;
QLIST_INSERT_HEAD(&state->sockets, socket, next);
}
- socket = NULL;
trace_curl_sock_cb(action, (int)fd);
switch (action) {
@@ -207,6 +202,11 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
break;
}
+ if (action == CURL_POLL_REMOVE) {
+ QLIST_REMOVE(socket, next);
+ g_free(socket);
+ }
+
return 0;
}

@@ -0,0 +1,77 @@
From: Max Reitz <mreitz@redhat.com>
Date: Tue, 10 Sep 2019 14:41:33 +0200
Subject: curl: Pass CURLSocket to curl_multi_do()
Git-commit: 9dbad87d25587ff640ef878f7b6159fc368ff541
curl_multi_do_locked() currently marks all sockets as ready. That is
not only inefficient, but in fact unsafe (the loop is). A follow-up
patch will change that, but to do so, curl_multi_do_locked() needs to
know exactly which socket is ready; and that is accomplished by this
patch here.
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-5-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/curl.c | 20 +++++++++++---------
1 file changed, 11 insertions(+), 9 deletions(-)
diff --git a/block/curl.c b/block/curl.c
index 5838afef99e070d8e7b704fa55e7..cf2686218dcf4bc7d2db1a7026f9 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -185,15 +185,15 @@ static int curl_sock_cb(CURL *curl, curl_socket_t fd, int action,
switch (action) {
case CURL_POLL_IN:
aio_set_fd_handler(s->aio_context, fd, false,
- curl_multi_do, NULL, NULL, state);
+ curl_multi_do, NULL, NULL, socket);
break;
case CURL_POLL_OUT:
aio_set_fd_handler(s->aio_context, fd, false,
- NULL, curl_multi_do, NULL, state);
+ NULL, curl_multi_do, NULL, socket);
break;
case CURL_POLL_INOUT:
aio_set_fd_handler(s->aio_context, fd, false,
- curl_multi_do, curl_multi_do, NULL, state);
+ curl_multi_do, curl_multi_do, NULL, socket);
break;
case CURL_POLL_REMOVE:
aio_set_fd_handler(s->aio_context, fd, false,
@@ -392,9 +392,10 @@ static void curl_multi_check_completion(BDRVCURLState *s)
}
/* Called with s->mutex held. */
-static void curl_multi_do_locked(CURLState *s)
+static void curl_multi_do_locked(CURLSocket *ready_socket)
{
CURLSocket *socket, *next_socket;
+ CURLState *s = ready_socket->state;
int running;
int r;
@@ -413,12 +414,13 @@ static void curl_multi_do_locked(CURLState *s)
static void curl_multi_do(void *arg)
{
- CURLState *s = (CURLState *)arg;
+ CURLSocket *socket = arg;
+ BDRVCURLState *s = socket->state->s;
- qemu_mutex_lock(&s->s->mutex);
- curl_multi_do_locked(s);
- curl_multi_check_completion(s->s);
- qemu_mutex_unlock(&s->s->mutex);
+ qemu_mutex_lock(&s->mutex);
+ curl_multi_do_locked(socket);
+ curl_multi_check_completion(s);
+ qemu_mutex_unlock(&s->mutex);
}
static void curl_multi_timeout_do(void *arg)

@@ -0,0 +1,61 @@
From: Max Reitz <mreitz@redhat.com>
Date: Tue, 10 Sep 2019 14:41:34 +0200
Subject: curl: Report only ready sockets
Git-commit: 9abaf9fc474c3dd53e8e119326abc774c977c331
Instead of reporting all sockets to cURL, only report the one that has
caused curl_multi_do_locked() to be called. This lets us get rid of the
QLIST_FOREACH_SAFE() loop, which was actually wrong: SAFE foreaches are
only safe when the current element is removed in each iteration. If it is
possible for the list to be concurrently modified, we cannot guarantee
that only the current element will be removed. Therefore, we must not
use QLIST_FOREACH_SAFE() here.
Fixes: ff5ca1664af85b24a4180d595ea6873fd3deac57
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Message-id: 20190910124136.10565-6-mreitz@redhat.com
Reviewed-by: Maxim Levitsky <mlevitsk@redhat.com>
Reviewed-by: John Snow <jsnow@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/curl.c | 17 ++++++-----------
1 file changed, 6 insertions(+), 11 deletions(-)
diff --git a/block/curl.c b/block/curl.c
index cf2686218dcf4bc7d2db1a7026f9..fd70f1ebc458f22f6d1a4bc01e1e 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -392,24 +392,19 @@ static void curl_multi_check_completion(BDRVCURLState *s)
}
/* Called with s->mutex held. */
-static void curl_multi_do_locked(CURLSocket *ready_socket)
+static void curl_multi_do_locked(CURLSocket *socket)
{
- CURLSocket *socket, *next_socket;
- CURLState *s = ready_socket->state;
+ BDRVCURLState *s = socket->state->s;
int running;
int r;
- if (!s->s->multi) {
+ if (!s->multi) {
return;
}
- /* Need to use _SAFE because curl_multi_socket_action() may trigger
- * curl_sock_cb() which might modify this list */
- QLIST_FOREACH_SAFE(socket, &s->sockets, next, next_socket) {
- do {
- r = curl_multi_socket_action(s->s->multi, socket->fd, 0, &running);
- } while (r == CURLM_CALL_MULTI_PERFORM);
- }
+ do {
+ r = curl_multi_socket_action(s->multi, socket->fd, 0, &running);
+ } while (r == CURLM_CALL_MULTI_PERFORM);
}
static void curl_multi_do(void *arg)

@@ -0,0 +1,45 @@
From: Peter Maydell <peter.maydell@linaro.org>
Date: Fri, 20 Sep 2019 18:40:39 +0100
Subject: hw/arm/boot.c: Set NSACR.{CP11,CP10} for NS kernel boots
Git-commit: ece628fcf69cbbd4b3efb6fbd203af07609467a2
If we're booting a Linux kernel directly into Non-Secure
state on a CPU which has Secure state, then make sure we
set the NSACR CP11 and CP10 bits, so that Non-Secure is allowed
to access the FPU. Otherwise an AArch32 kernel will UNDEF as
soon as it tries to use the FPU.
It used to not matter that we didn't do this until commit
fc1120a7f5f2d4b6, where we implemented actually honouring
these NSACR bits.
The problem only exists for CPUs where EL3 is AArch32; the
equivalent AArch64 trap bits are in CPTR_EL3 and are "0 to
not trap, 1 to trap", so the reset value of the register
permits NS access, unlike NSACR.
Fixes: fc1120a7f5
Fixes: https://bugs.launchpad.net/qemu/+bug/1844597
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20190920174039.3916-1-peter.maydell@linaro.org
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
hw/arm/boot.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/hw/arm/boot.c b/hw/arm/boot.c
index c2b89b3bb9b6b92b0293d859712e..fc4e021a38a6bc1e5e2aa5b5876c 100644
--- a/hw/arm/boot.c
+++ b/hw/arm/boot.c
@@ -754,6 +754,8 @@ static void do_cpu_reset(void *opaque)
(cs != first_cpu || !info->secure_board_setup)) {
/* Linux expects non-secure state */
env->cp15.scr_el3 |= SCR_NS;
+ /* Set NSACR.{CP11,CP10} so NS can access the FPU */
+ env->cp15.nsacr |= 3 << 10;
}
}

@@ -0,0 +1,43 @@
From: Thomas Huth <thuth@redhat.com>
Date: Wed, 25 Sep 2019 14:16:43 +0200
Subject: hw/core/loader: Fix possible crash in rom_copy()
Git-commit: e423455c4f23a1a828901c78fe6d03b7dde79319
Both, "rom->addr" and "addr" are derived from the binary image
that can be loaded with the "-kernel" paramer. The code in
rom_copy() then calculates:
d = dest + (rom->addr - addr);
and uses "d" as destination in a memcpy() some lines later. Now with
bad kernel images, it is possible that rom->addr is smaller than addr,
thus "rom->addr - addr" gets negative and the memcpy() then tries to
copy contents from the image to a bad memory location. This could
maybe be used to inject code from a kernel image into the QEMU binary,
so we better fix it with an additional sanity check here.
Cc: qemu-stable@nongnu.org
Reported-by: Guangming Liu
Buglink: https://bugs.launchpad.net/qemu/+bug/1844635
Message-Id: <20190925130331.27825-1-thuth@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
hw/core/loader.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/hw/core/loader.c b/hw/core/loader.c
index 425bf69a9968765b4604a442eb0a..838a34174ac2039d55f557fa427a 100644
--- a/hw/core/loader.c
+++ b/hw/core/loader.c
@@ -1242,7 +1242,7 @@ int rom_copy(uint8_t *dest, hwaddr addr, size_t size)
if (rom->addr + rom->romsize < addr) {
continue;
}
- if (rom->addr > end) {
+ if (rom->addr > end || rom->addr < addr) {
break;
}
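
For illustration, here is a small self-contained sketch of the same bound check outside QEMU; copy_rom(), rom_addr and romdata are hypothetical stand-ins for rom_copy() and the Rom fields, and unlike the real code it does not clamp the copy length.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t hwaddr;

static void copy_rom(uint8_t *dest, hwaddr addr, size_t size,
                     hwaddr rom_addr, const uint8_t *romdata, size_t romsize)
{
    hwaddr end = addr + size;

    if (rom_addr + romsize < addr) {
        return;                         /* ROM entirely below the window */
    }
    if (rom_addr > end || rom_addr < addr) {
        return;                         /* above the window, or rom_addr - addr
                                           would go negative (the new check) */
    }
    memcpy(dest + (rom_addr - addr), romdata, romsize);
}

int main(void)
{
    uint8_t window[16] = { 0 };
    const uint8_t image[4] = { 1, 2, 3, 4 };

    /* Fits inside the window: copied at offset 4. */
    copy_rom(window, 0x1000, sizeof(window), 0x1004, image, sizeof(image));

    /* Starts below the window but overlaps it: rejected by the new check;
     * without it, dest + (rom_addr - addr) would point before 'window'. */
    copy_rom(window, 0x1000, sizeof(window), 0x0ffe, image, sizeof(image));

    printf("window[4] = %u\n", window[4]);  /* prints 1 */
    return 0;
}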

@@ -0,0 +1,44 @@
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Wed, 14 Aug 2019 18:55:34 +0100
Subject: memory: Provide an equality function for MemoryRegionSections
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Git-commit: 9366cf02e4e31c2a8128904d4d8290a0fad5f888
Provide a comparison function that checks all the fields are the same.
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-Id: <20190814175535.2023-3-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
include/exec/memory.h | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/include/exec/memory.h b/include/exec/memory.h
index bb0961ddb96788539e7138d4f5b3..25bc7ef1adc04d6de1ce1a41a38a 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -496,6 +496,18 @@ struct MemoryRegionSection {
bool nonvolatile;
};
+static inline bool MemoryRegionSection_eq(MemoryRegionSection *a,
+ MemoryRegionSection *b)
+{
+ return a->mr == b->mr &&
+ a->fv == b->fv &&
+ a->offset_within_region == b->offset_within_region &&
+ a->offset_within_address_space == b->offset_within_address_space &&
+ int128_eq(a->size, b->size) &&
+ a->readonly == b->readonly &&
+ a->nonvolatile == b->nonvolatile;
+}
+
/**
* memory_region_init: Initialize a memory region
*

@@ -0,0 +1,50 @@
From: Kevin Wolf <kwolf@redhat.com>
Date: Mon, 22 Jul 2019 17:44:27 +0200
Subject: mirror: Keep mirror_top_bs drained after dropping permissions
Git-commit: d2da5e288a2e71e82866c8fdefd41b5727300124
mirror_top_bs is currently implicitly drained through its connection to
the source or the target node. However, the drain section for target_bs
ends early after moving mirror_top_bs from src to target_bs, so that
requests can already be restarted while mirror_top_bs is still present
in the chain, but has dropped all permissions and therefore runs into an
assertion failure like this:
qemu-system-x86_64: block/io.c:1634: bdrv_co_write_req_prepare:
Assertion `child->perm & BLK_PERM_WRITE' failed.
Keep mirror_top_bs drained until all graph changes have completed.
Cc: qemu-stable@nongnu.org
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/mirror.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/mirror.c b/block/mirror.c
index 9f5c59ece1df391babc4461f63cb..642d6570cc97e1239b119a46c457 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -656,7 +656,10 @@ static int mirror_exit_common(Job *job)
s->target = NULL;
/* We don't access the source any more. Dropping any WRITE/RESIZE is
- * required before it could become a backing file of target_bs. */
+ * required before it could become a backing file of target_bs. Not having
+ * these permissions any more means that we can't allow any new requests on
+ * mirror_top_bs from now on, so keep it drained. */
+ bdrv_drained_begin(mirror_top_bs);
bs_opaque->stop = true;
bdrv_child_refresh_perms(mirror_top_bs, mirror_top_bs->backing,
&error_abort);
@@ -724,6 +727,7 @@ static int mirror_exit_common(Job *job)
bs_opaque->job = NULL;
bdrv_drained_end(src);
+ bdrv_drained_end(mirror_top_bs);
s->in_drain = false;
bdrv_unref(mirror_top_bs);
bdrv_unref(src);

@@ -0,0 +1,37 @@
From: Markus Armbruster <armbru@redhat.com>
Date: Thu, 22 Aug 2019 15:38:46 +0200
Subject: pr-manager: Fix invalid g_free() crash bug
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Git-commit: 6b9d62c2a9e83bbad73fb61406f0ff69b46ff6f3
pr_manager_worker() passes its @opaque argument to g_free(). Wrong;
it points to pr_manager_worker()'s automatic @data. Broken when
commit 2f3a7ab39be converted @data from heap- to stack-allocated. Fix
by deleting the g_free().
Fixes: 2f3a7ab39bec4ba8022dc4d42ea641165b004e3e
Cc: qemu-stable@nongnu.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
scsi/pr-manager.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/scsi/pr-manager.c b/scsi/pr-manager.c
index ee43663576ed32c3d27649157e83..0c866e869835930767dacd3a0b21 100644
--- a/scsi/pr-manager.c
+++ b/scsi/pr-manager.c
@@ -39,7 +39,6 @@ static int pr_manager_worker(void *opaque)
int fd = data->fd;
int r;
- g_free(data);
trace_pr_manager_run(fd, hdr->cmdp[0], hdr->cmdp[1]);
/* The reference was taken in pr_manager_execute. */

@@ -0,0 +1,56 @@
From: Alberto Garcia <berto@igalia.com>
Date: Fri, 16 Aug 2019 15:17:42 +0300
Subject: qcow2: Fix the calculation of the maximum L2 cache size
Git-commit: b70d08205b2e4044c529eefc21df2c8ab61b473b
The size of the qcow2 L2 cache defaults to 32 MB, which can be easily
larger than the maximum amount of L2 metadata that the image can have.
For example: with 64 KB clusters the user would need a qcow2 image
with a virtual size of 256 GB in order to have 32 MB of L2 metadata.
Because of that, since commit b749562d9822d14ef69c9eaa5f85903010b86c30
we forbid the L2 cache to become larger than the maximum amount of L2
metadata for the image, calculated using this formula:
uint64_t max_l2_cache = virtual_disk_size / (s->cluster_size / 8);
The problem with this formula is that the result should be rounded up
to the cluster size because an L2 table on disk always takes one full
cluster.
For example, a 1280 MB qcow2 image with 64 KB clusters needs exactly
160 KB of L2 metadata, but we need 192 KB on disk (3 clusters) even if
the last 32 KB of those are not going to be used.
However QEMU rounds the numbers down and only creates 2 cache tables
(128 KB), which is not enough for the image.
A quick test doing 4KB random writes on a 1280 MB image gives me
around 500 IOPS, while with the correct cache size I get 16K IOPS.
Cc: qemu-stable@nongnu.org
Signed-off-by: Alberto Garcia <berto@igalia.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/qcow2.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/block/qcow2.c b/block/qcow2.c
index 039bdc2f7e799f935f5364daed5c..865839682cd639d1b7aba0cc328f 100644
--- a/block/qcow2.c
+++ b/block/qcow2.c
@@ -826,7 +826,11 @@ static void read_cache_sizes(BlockDriverState *bs, QemuOpts *opts,
bool l2_cache_entry_size_set;
int min_refcount_cache = MIN_REFCOUNT_CACHE_SIZE * s->cluster_size;
uint64_t virtual_disk_size = bs->total_sectors * BDRV_SECTOR_SIZE;
- uint64_t max_l2_cache = virtual_disk_size / (s->cluster_size / 8);
+ uint64_t max_l2_entries = DIV_ROUND_UP(virtual_disk_size, s->cluster_size);
+ /* An L2 table is always one cluster in size so the max cache size
+ * should be a multiple of the cluster size. */
+ uint64_t max_l2_cache = ROUND_UP(max_l2_entries * sizeof(uint64_t),
+ s->cluster_size);
combined_cache_size_set = qemu_opt_get(opts, QCOW2_OPT_CACHE_SIZE);
l2_cache_size_set = qemu_opt_get(opts, QCOW2_OPT_L2_CACHE_SIZE);
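
The numbers in the commit message are easy to check by hand. A standalone sketch of the arithmetic for the 1280 MB / 64 KB-cluster example, written in plain C rather than with QEMU's DIV_ROUND_UP/ROUND_UP macros:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t virtual_disk_size = 1280ULL << 20;   /* 1280 MB image  */
    uint64_t cluster_size      = 64 << 10;        /* 64 KB clusters */

    /* Old formula: 160 KB, not a whole number of 64 KB L2 tables, so the
     * cache code rounds down to 2 tables (128 KB) - too small. */
    uint64_t old_max = virtual_disk_size / (cluster_size / 8);

    /* New formula: one 8-byte L2 entry per cluster, rounded up to whole
     * clusters on disk: 3 tables, 192 KB. */
    uint64_t l2_entries = (virtual_disk_size + cluster_size - 1) / cluster_size;
    uint64_t new_max = ((l2_entries * 8 + cluster_size - 1) / cluster_size)
                       * cluster_size;

    printf("old max L2 cache: %" PRIu64 " KB\n", old_max / 1024);
    printf("new max L2 cache: %" PRIu64 " KB\n", new_max / 1024);
    return 0;
}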

@@ -1,3 +1,33 @@
-------------------------------------------------------------------
Tue Oct 1 22:07:37 UTC 2019 - Bruce Rogers <brogers@suse.com>
- Add some post v4.1.0 upstream stable patches
* Patches added:
mirror-Keep-mirror_top_bs-drained-after-.patch
s390x-tcg-Fix-VERIM-with-32-64-bit-eleme.patch
target-alpha-fix-tlb_fill-trap_arg2-valu.patch
target-arm-Free-TCG-temps-in-trans_VMOV_.patch
target-arm-Don-t-abort-on-M-profile-exce.patch
qcow2-Fix-the-calculation-of-the-maximum.patch
block-file-posix-Reduce-xfsctl-use.patch
pr-manager-Fix-invalid-g_free-crash-bug.patch
vpc-Return-0-from-vpc_co_create-on-succe.patch
block-nfs-tear-down-aio-before-nfs_close.patch
block-create-Do-not-abort-if-a-block-dri.patch
curl-Keep-pointer-to-the-CURLState-in-CU.patch
curl-Keep-socket-until-the-end-of-curl_s.patch
curl-Check-completion-in-curl_multi_do.patch
curl-Pass-CURLSocket-to-curl_multi_do.patch
curl-Report-only-ready-sockets.patch
curl-Handle-success-in-multi_check_compl.patch
blockjob-update-nodes-head-while-removin.patch
memory-Provide-an-equality-function-for-.patch
vhost-Fix-memory-region-section-comparis.patch
hw-arm-boot.c-Set-NSACR.-CP11-CP10-for-N.patch
s390-PCI-fix-IOMMU-region-init.patch
hw-core-loader-Fix-possible-crash-in-rom.patch
- Patch queue updated from git://github.com/openSUSE/qemu.git opensuse-4.1
-------------------------------------------------------------------
Wed Sep 11 14:31:26 UTC 2019 - Bruce Rogers <brogers@suse.com>

qemu.spec (133 changed lines)

@@ -111,54 +111,78 @@ Source10: supported.arm.txt
Source11: supported.ppc.txt
Source12: supported.x86.txt
Source13: supported.s390.txt
# this is to make lint happy
Source200: qemu-rpmlintrc
%endif # qemu
Source300: update_git.sh
Source301: config.sh
Source200: qemu-rpmlintrc
Source300: bundles.tar.xz
Source301: update_git.sh
Source302: config.sh
Source303: README.PACKAGING
# Upstream First -- https://wiki.qemu.org/Contribute/SubmitAPatch
# This patch queue is auto-generated - see README.PACKAGING for process
# Patches applied in base project:
Patch00000: XXX-dont-dump-core-on-sigabort.patch
Patch00001: qemu-binfmt-conf-Modify-default-path.patch
Patch00002: qemu-cvs-gettimeofday.patch
Patch00003: qemu-cvs-ioctl_debug.patch
Patch00004: qemu-cvs-ioctl_nodirection.patch
Patch00005: linux-user-add-binfmt-wrapper-for-argv-0.patch
Patch00006: PPC-KVM-Disable-mmu-notifier-check.patch
Patch00007: linux-user-binfmt-support-host-binaries.patch
Patch00008: linux-user-Fake-proc-cpuinfo.patch
Patch00009: linux-user-use-target_ulong.patch
Patch00010: Make-char-muxer-more-robust-wrt-small-FI.patch
Patch00011: linux-user-lseek-explicitly-cast-non-set.patch
Patch00012: AIO-Reduce-number-of-threads-for-32bit-h.patch
Patch00013: xen_disk-Add-suse-specific-flush-disable.patch
Patch00014: qemu-bridge-helper-reduce-security-profi.patch
Patch00015: qemu-binfmt-conf-use-qemu-ARCH-binfmt.patch
Patch00016: linux-user-properly-test-for-infinite-ti.patch
Patch00017: roms-Makefile-pass-a-packaging-timestamp.patch
Patch00018: Raise-soft-address-space-limit-to-hard-l.patch
Patch00019: increase-x86_64-physical-bits-to-42.patch
Patch00020: vga-Raise-VRAM-to-16-MiB-for-pc-0.15-and.patch
Patch00021: i8254-Fix-migration-from-SLE11-SP2.patch
Patch00022: acpi_piix4-Fix-migration-from-SLE11-SP2.patch
Patch00023: Switch-order-of-libraries-for-mpath-supp.patch
Patch00024: Make-installed-scripts-explicitly-python.patch
Patch00025: hw-smbios-handle-both-file-formats-regar.patch
Patch00026: xen-add-block-resize-support-for-xen-dis.patch
Patch00027: tests-qemu-iotests-Triple-timeout-of-i-o.patch
Patch00028: tests-block-io-test-130-needs-some-delay.patch
Patch00029: xen-ignore-live-parameter-from-xen-save-.patch
Patch00030: Conditionalize-ui-bitmap-installation-be.patch
Patch00031: tests-change-error-message-in-test-162.patch
Patch00032: hw-usb-hcd-xhci-Fix-GCC-9-build-warning.patch
Patch00033: hw-usb-dev-mtp-Fix-GCC-9-build-warning.patch
Patch00034: hw-intc-exynos4210_gic-provide-more-room.patch
Patch00035: configure-only-populate-roms-if-softmmu.patch
Patch00036: pc-bios-s390-ccw-net-avoid-warning-about.patch
Patch00037: roms-change-cross-compiler-naming-to-be-.patch
Patch00038: roms-Makefile.edk2-don-t-invoke-git-sinc.patch
Patch00000: mirror-Keep-mirror_top_bs-drained-after-.patch
Patch00001: s390x-tcg-Fix-VERIM-with-32-64-bit-eleme.patch
Patch00002: target-alpha-fix-tlb_fill-trap_arg2-valu.patch
Patch00003: target-arm-Free-TCG-temps-in-trans_VMOV_.patch
Patch00004: target-arm-Don-t-abort-on-M-profile-exce.patch
Patch00005: qcow2-Fix-the-calculation-of-the-maximum.patch
Patch00006: block-file-posix-Reduce-xfsctl-use.patch
Patch00007: pr-manager-Fix-invalid-g_free-crash-bug.patch
Patch00008: vpc-Return-0-from-vpc_co_create-on-succe.patch
Patch00009: block-nfs-tear-down-aio-before-nfs_close.patch
Patch00010: block-create-Do-not-abort-if-a-block-dri.patch
Patch00011: curl-Keep-pointer-to-the-CURLState-in-CU.patch
Patch00012: curl-Keep-socket-until-the-end-of-curl_s.patch
Patch00013: curl-Check-completion-in-curl_multi_do.patch
Patch00014: curl-Pass-CURLSocket-to-curl_multi_do.patch
Patch00015: curl-Report-only-ready-sockets.patch
Patch00016: curl-Handle-success-in-multi_check_compl.patch
Patch00017: blockjob-update-nodes-head-while-removin.patch
Patch00018: memory-Provide-an-equality-function-for-.patch
Patch00019: vhost-Fix-memory-region-section-comparis.patch
Patch00020: hw-arm-boot.c-Set-NSACR.-CP11-CP10-for-N.patch
Patch00021: s390-PCI-fix-IOMMU-region-init.patch
Patch00022: hw-core-loader-Fix-possible-crash-in-rom.patch
Patch00023: XXX-dont-dump-core-on-sigabort.patch
Patch00024: qemu-binfmt-conf-Modify-default-path.patch
Patch00025: qemu-cvs-gettimeofday.patch
Patch00026: qemu-cvs-ioctl_debug.patch
Patch00027: qemu-cvs-ioctl_nodirection.patch
Patch00028: linux-user-add-binfmt-wrapper-for-argv-0.patch
Patch00029: PPC-KVM-Disable-mmu-notifier-check.patch
Patch00030: linux-user-binfmt-support-host-binaries.patch
Patch00031: linux-user-Fake-proc-cpuinfo.patch
Patch00032: linux-user-use-target_ulong.patch
Patch00033: Make-char-muxer-more-robust-wrt-small-FI.patch
Patch00034: linux-user-lseek-explicitly-cast-non-set.patch
Patch00035: AIO-Reduce-number-of-threads-for-32bit-h.patch
Patch00036: xen_disk-Add-suse-specific-flush-disable.patch
Patch00037: qemu-bridge-helper-reduce-security-profi.patch
Patch00038: qemu-binfmt-conf-use-qemu-ARCH-binfmt.patch
Patch00039: linux-user-properly-test-for-infinite-ti.patch
Patch00040: roms-Makefile-pass-a-packaging-timestamp.patch
Patch00041: Raise-soft-address-space-limit-to-hard-l.patch
Patch00042: increase-x86_64-physical-bits-to-42.patch
Patch00043: vga-Raise-VRAM-to-16-MiB-for-pc-0.15-and.patch
Patch00044: i8254-Fix-migration-from-SLE11-SP2.patch
Patch00045: acpi_piix4-Fix-migration-from-SLE11-SP2.patch
Patch00046: Switch-order-of-libraries-for-mpath-supp.patch
Patch00047: Make-installed-scripts-explicitly-python.patch
Patch00048: hw-smbios-handle-both-file-formats-regar.patch
Patch00049: xen-add-block-resize-support-for-xen-dis.patch
Patch00050: tests-qemu-iotests-Triple-timeout-of-i-o.patch
Patch00051: tests-block-io-test-130-needs-some-delay.patch
Patch00052: xen-ignore-live-parameter-from-xen-save-.patch
Patch00053: Conditionalize-ui-bitmap-installation-be.patch
Patch00054: tests-change-error-message-in-test-162.patch
Patch00055: hw-usb-hcd-xhci-Fix-GCC-9-build-warning.patch
Patch00056: hw-usb-dev-mtp-Fix-GCC-9-build-warning.patch
Patch00057: hw-intc-exynos4210_gic-provide-more-room.patch
Patch00058: configure-only-populate-roms-if-softmmu.patch
Patch00059: pc-bios-s390-ccw-net-avoid-warning-about.patch
Patch00060: roms-change-cross-compiler-naming-to-be-.patch
Patch00061: roms-Makefile.edk2-don-t-invoke-git-sinc.patch
# Patches applied in roms/seabios/:
Patch01000: seabios-use-python2-explicitly-as-needed.patch
Patch01001: seabios-switch-to-python3-as-needed.patch
@@ -911,6 +935,29 @@ This package provides a service file for starting and stopping KSM.
%patch00036 -p1
%patch00037 -p1
%patch00038 -p1
%patch00039 -p1
%patch00040 -p1
%patch00041 -p1
%patch00042 -p1
%patch00043 -p1
%patch00044 -p1
%patch00045 -p1
%patch00046 -p1
%patch00047 -p1
%patch00048 -p1
%patch00049 -p1
%patch00050 -p1
%patch00051 -p1
%patch00052 -p1
%patch00053 -p1
%patch00054 -p1
%patch00055 -p1
%patch00056 -p1
%patch00057 -p1
%patch00058 -p1
%patch00059 -p1
%patch00060 -p1
%patch00061 -p1
%patch01000 -p1
%patch01001 -p1
%patch01002 -p1

@@ -109,11 +109,12 @@ Source10: supported.arm.txt
Source11: supported.ppc.txt
Source12: supported.x86.txt
Source13: supported.s390.txt
# this is to make lint happy
Source200: qemu-rpmlintrc
%endif # qemu
Source300: update_git.sh
Source301: config.sh
Source200: qemu-rpmlintrc
Source300: bundles.tar.xz
Source301: update_git.sh
Source302: config.sh
Source303: README.PACKAGING
# Upstream First -- https://wiki.qemu.org/Contribute/SubmitAPatch
# This patch queue is auto-generated - see README.PACKAGING for process

@@ -0,0 +1,48 @@
From: Matthew Rosato <mjrosato@linux.ibm.com>
Date: Thu, 26 Sep 2019 10:10:36 -0400
Subject: s390: PCI: fix IOMMU region init
Git-commit: 7df1dac5f1c85312474df9cb3a8fcae72303da62
The fix in dbe9cf606c shrinks the IOMMU memory region to a size
that seems reasonable on the surface, however is actually too
small as it is based against a 0-mapped address space. This
causes breakage with small guests as they can overrun the IOMMU window.
Let's go back to the prior method of initializing iommu for now.
Fixes: dbe9cf606c ("s390x/pci: Set the iommu region size mpcifc request")
Cc: qemu-stable@nongnu.org
Reviewed-by: Pierre Morel <pmorel@linux.ibm.com>
Reported-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Tested-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
Reported-by: Stefan Zimmerman <stzi@linux.ibm.com>
Signed-off-by: Matthew Rosato <mjrosato@linux.ibm.com>
Message-Id: <1569507036-15314-1-git-send-email-mjrosato@linux.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
hw/s390x/s390-pci-bus.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/hw/s390x/s390-pci-bus.c b/hw/s390x/s390-pci-bus.c
index 2c6e084e2c2636b55980799b5837..9a935f22b5b06a67c8fbd7b6abb6 100644
--- a/hw/s390x/s390-pci-bus.c
+++ b/hw/s390x/s390-pci-bus.c
@@ -694,10 +694,15 @@ static const MemoryRegionOps s390_msi_ctrl_ops = {
void s390_pci_iommu_enable(S390PCIIOMMU *iommu)
{
+ /*
+ * The iommu region is initialized against a 0-mapped address space,
+ * so the smallest IOMMU region we can define runs from 0 to the end
+ * of the PCI address space.
+ */
char *name = g_strdup_printf("iommu-s390-%04x", iommu->pbdev->uid);
memory_region_init_iommu(&iommu->iommu_mr, sizeof(iommu->iommu_mr),
TYPE_S390_IOMMU_MEMORY_REGION, OBJECT(&iommu->mr),
- name, iommu->pal - iommu->pba + 1);
+ name, iommu->pal + 1);
iommu->enabled = true;
memory_region_add_subregion(&iommu->mr, 0, MEMORY_REGION(&iommu->iommu_mr));
g_free(name);
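
A small sketch of the arithmetic behind the fix; the pba/pal values are made up for illustration and simply stand for the start and end of the guest DMA aperture.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t pba = 0x10000000;   /* start of the guest DMA aperture (example) */
    uint64_t pal = 0x3fffffff;   /* last usable guest DMA address (example)   */

    /* The IOMMU region is mapped at offset 0 of the address space, so a
     * region of size pal - pba + 1 only covers 0 .. pal - pba and ends
     * below pal, while the guest may DMA up to pal. */
    uint64_t old_size = pal - pba + 1;
    uint64_t new_size = pal + 1;

    printf("old region covers 0x0 .. 0x%" PRIx64 "\n", old_size - 1);
    printf("new region covers 0x0 .. 0x%" PRIx64 " (== pal)\n", new_size - 1);
    return 0;
}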

@@ -0,0 +1,34 @@
From: David Hildenbrand <david@redhat.com>
Date: Wed, 14 Aug 2019 17:12:42 +0200
Subject: s390x/tcg: Fix VERIM with 32/64 bit elements
Git-commit: 25bcb45d1b81d22634daa2b1a2d8bee746ac129b
Wrong order of operands. The constant always comes last. Makes QEMU crash
reliably on specific git fetch invocations.
Reported-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Message-Id: <20190814151242.27199-1-david@redhat.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Fixes: 5c4b0ab460ef ("s390x/tcg: Implement VECTOR ELEMENT ROTATE AND INSERT UNDER MASK")
Cc: qemu-stable@nongnu.org
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
target/s390x/translate_vx.inc.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/target/s390x/translate_vx.inc.c b/target/s390x/translate_vx.inc.c
index 41d5cf869f94ef4c5842582bf830..0caddb3958cdbc820c0d4f1a074b 100644
--- a/target/s390x/translate_vx.inc.c
+++ b/target/s390x/translate_vx.inc.c
@@ -213,7 +213,7 @@ static void get_vec_element_ptr_i64(TCGv_ptr ptr, uint8_t reg, TCGv_i64 enr,
vec_full_reg_offset(v3), ptr, 16, 16, data, fn)
#define gen_gvec_3i(v1, v2, v3, c, gen) \
tcg_gen_gvec_3i(vec_full_reg_offset(v1), vec_full_reg_offset(v2), \
- vec_full_reg_offset(v3), c, 16, 16, gen)
+ vec_full_reg_offset(v3), 16, 16, c, gen)
#define gen_gvec_4(v1, v2, v3, v4, gen) \
tcg_gen_gvec_4(vec_full_reg_offset(v1), vec_full_reg_offset(v2), \
vec_full_reg_offset(v3), vec_full_reg_offset(v4), \

@@ -0,0 +1,41 @@
From: Aurelien Jarno <aurelien@aurel32.net>
Date: Thu, 22 Aug 2019 10:45:14 -0700
Subject: target/alpha: fix tlb_fill trap_arg2 value for instruction fetch
Git-commit: cb1de55a83eaca9ee32be9c959dca99e11f2fea8
Commit e41c94529740cc26 ("target/alpha: Convert to CPUClass::tlb_fill")
slightly changed the way the trap_arg2 value is computed in case of TLB
fill. The type of the variable used in the ternary operator has been
changed from an int to an enum. This causes the -1 value to not be
sign-extended to 64-bit in case of an instruction fetch. The trap_arg2
ends up with 0xffffffff instead of 0xffffffffffffffff. Fix that by
changing the -1 into -1LL.
This fixes the execution of user space processes in qemu-system-alpha.
Fixes: e41c94529740cc26
Cc: qemu-stable@nongnu.org
Signed-off-by: Aurelien Jarno <aurelien@aurel32.net>
[rth: Test MMU_DATA_LOAD and MMU_DATA_STORE instead of implying them.]
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
target/alpha/helper.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/target/alpha/helper.c b/target/alpha/helper.c
index 93b8e788b185f8b199b71256e5ff..d0cc6231925c932c192640632658 100644
--- a/target/alpha/helper.c
+++ b/target/alpha/helper.c
@@ -283,7 +283,9 @@ bool alpha_cpu_tlb_fill(CPUState *cs, vaddr addr, int size,
cs->exception_index = EXCP_MMFAULT;
env->trap_arg0 = addr;
env->trap_arg1 = fail;
- env->trap_arg2 = (access_type == MMU_INST_FETCH ? -1 : access_type);
+ env->trap_arg2 = (access_type == MMU_DATA_LOAD ? 0ull :
+ access_type == MMU_DATA_STORE ? 1ull :
+ /* access_type == MMU_INST_FETCH */ -1ull);
cpu_loop_exit_restore(cs, retaddr);
}
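
The sign-extension pitfall is easy to reproduce outside QEMU. A standalone sketch, with a mock enum standing in for MMUAccessType (which underlying type the compiler picks for such an enum is implementation-defined, which is exactly why the original expression misbehaved under GCC):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Mock of MMUAccessType: all enumerators are non-negative, so the compiler
 * is free to choose an unsigned underlying type. */
typedef enum {
    MMU_DATA_LOAD  = 0,
    MMU_DATA_STORE = 1,
    MMU_INST_FETCH = 2,
} AccessType;

int main(void)
{
    AccessType access_type = MMU_INST_FETCH;
    uint64_t trap_arg2;

    /* Old expression: -1 is converted to the enum's (possibly unsigned)
     * type, so it may end up as 0xffffffff and be zero-extended. */
    trap_arg2 = (access_type == MMU_INST_FETCH ? -1 : access_type);
    printf("enum ternary:     0x%016" PRIx64 "\n", trap_arg2);

    /* Fixed expression: 64-bit constants, so -1 stays all-ones. */
    trap_arg2 = (access_type == MMU_DATA_LOAD  ? 0ull :
                 access_type == MMU_DATA_STORE ? 1ull :
                 /* access_type == MMU_INST_FETCH */ -1ull);
    printf("64-bit constants: 0x%016" PRIx64 "\n", trap_arg2);
    return 0;
}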

@@ -0,0 +1,101 @@
From: Peter Maydell <peter.maydell@linaro.org>
Date: Thu, 22 Aug 2019 14:15:34 +0100
Subject: target/arm: Don't abort on M-profile exception return in linux-user
mode
Git-commit: 5e5584c89f36b302c666bc6db535fd3f7ff35ad2
An attempt to do an exception-return (branch to one of the magic
addresses) in linux-user mode for M-profile should behave like
a normal branch, because linux-user mode is always going to be
in 'handler' mode. This used to work, but we broke it when we added
support for the M-profile security extension in commit d02a8698d7ae2bfed.
In that commit we allowed even handler-mode calls to magic return
values to be checked for and dealt with by causing an
EXCP_EXCEPTION_EXIT exception to be taken, because this is
needed for the FNC_RETURN return-from-non-secure-function-call
handling. For system mode we added a check in do_v7m_exception_exit()
to make any spurious calls from Handler mode behave correctly, but
forgot that linux-user mode would also be affected.
How an attempted return-from-non-secure-function-call in linux-user
mode should be handled is not clear -- on real hardware it would
result in return to secure code (not to the Linux kernel) which
could then handle the error in any way it chose. For QEMU we take
the simple approach of treating this erroneous return the same way
it would be handled on a CPU without the security extensions --
treat it as a normal branch.
The upshot of all this is that for linux-user mode we should never
do any of the bx_excret magic, so the code change is simple.
This ought to be a weird corner case that only affects broken guest
code (because Linux user processes should never be attempting to do
exception returns or NS function returns), except that the code that
assigns addresses in RAM for the process and stack in our linux-user
code does not attempt to avoid this magic address range, so
legitimate code attempting to return to a trampoline routine on the
stack can fall into this case. This change fixes those programs,
but we should also look at restricting the range of memory we
use for M-profile linux-user guests to the area that would be
real RAM in hardware.
Cc: qemu-stable@nongnu.org
Reported-by: Christophe Lyon <christophe.lyon@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-id: 20190822131534.16602-1-peter.maydell@linaro.org
Fixes: https://bugs.launchpad.net/qemu/+bug/1840922
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
target/arm/translate.c | 21 ++++++++++++++++++++-
1 file changed, 20 insertions(+), 1 deletion(-)
diff --git a/target/arm/translate.c b/target/arm/translate.c
index 7853462b21b870fdc3e3d2166a3e..24cb4ba075d095e050b193570ad2 100644
--- a/target/arm/translate.c
+++ b/target/arm/translate.c
@@ -952,10 +952,27 @@ static inline void gen_bx(DisasContext *s, TCGv_i32 var)
store_cpu_field(var, thumb);
}
-/* Set PC and Thumb state from var. var is marked as dead.
+/*
+ * Set PC and Thumb state from var. var is marked as dead.
* For M-profile CPUs, include logic to detect exception-return
* branches and handle them. This is needed for Thumb POP/LDM to PC, LDR to PC,
* and BX reg, and no others, and happens only for code in Handler mode.
+ * The Security Extension also requires us to check for the FNC_RETURN
+ * which signals a function return from non-secure state; this can happen
+ * in both Handler and Thread mode.
+ * To avoid having to do multiple comparisons in inline generated code,
+ * we make the check we do here loose, so it will match for EXC_RETURN
+ * in Thread mode. For system emulation do_v7m_exception_exit() checks
+ * for these spurious cases and returns without doing anything (giving
+ * the same behaviour as for a branch to a non-magic address).
+ *
+ * In linux-user mode it is unclear what the right behaviour for an
+ * attempted FNC_RETURN should be, because in real hardware this will go
+ * directly to Secure code (ie not the Linux kernel) which will then treat
+ * the error in any way it chooses. For QEMU we opt to make the FNC_RETURN
+ * attempt behave the way it would on a CPU without the security extension,
+ * which is to say "like a normal branch". That means we can simply treat
+ * all branches as normal with no magic address behaviour.
*/
static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
{
@@ -963,10 +980,12 @@ static inline void gen_bx_excret(DisasContext *s, TCGv_i32 var)
* s->base.is_jmp that we need to do the rest of the work later.
*/
gen_bx(s, var);
+#ifndef CONFIG_USER_ONLY
if (arm_dc_feature(s, ARM_FEATURE_M_SECURITY) ||
(s->v7m_handler_mode && arm_dc_feature(s, ARM_FEATURE_M))) {
s->base.is_jmp = DISAS_BX_EXCRET;
}
+#endif
}
static inline void gen_bx_excret_final_code(DisasContext *s)

@@ -0,0 +1,38 @@
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 27 Aug 2019 13:19:31 +0100
Subject: target/arm: Free TCG temps in trans_VMOV_64_sp()
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Git-commit: 342d27581bd3ecdb995e4fc55fcd383cf3242888
The function neon_store_reg32() doesn't free the TCG temp that it
is passed, so the caller must do that. We got this right in most
places but forgot to free the TCG temps in trans_VMOV_64_sp().
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190827121931.26836-1-peter.maydell@linaro.org
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
target/arm/translate-vfp.inc.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/target/arm/translate-vfp.inc.c b/target/arm/translate-vfp.inc.c
index 092eb5ec53d944e078f4449c10f1..ef45cecbeac18edb6dffbcad7980 100644
--- a/target/arm/translate-vfp.inc.c
+++ b/target/arm/translate-vfp.inc.c
@@ -881,8 +881,10 @@ static bool trans_VMOV_64_sp(DisasContext *s, arg_VMOV_64_sp *a)
/* gpreg to fpreg */
tmp = load_reg(s, a->rt);
neon_store_reg32(tmp, a->vm);
+ tcg_temp_free_i32(tmp);
tmp = load_reg(s, a->rt2);
neon_store_reg32(tmp, a->vm + 1);
+ tcg_temp_free_i32(tmp);
}
return true;
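
The rule behind the fix can be shown with an ordinary C ownership sketch (a toy model for this explanation, not QEMU code): the store helper copies the value but never frees it, so the caller that allocated the temp releases it, exactly as the two added tcg_temp_free_i32() calls do.

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct { uint32_t val; } Temp;
    static uint32_t regs[2];

    static Temp *load_value(uint32_t v)
    {
        Temp *t = malloc(sizeof(*t));
        t->val = v;
        return t;
    }

    /* Copies the value into the register file; does NOT take ownership. */
    static void store_reg32(Temp *t, int idx)
    {
        regs[idx] = t->val;
    }

    int main(void)
    {
        Temp *tmp = load_value(1);
        store_reg32(tmp, 0);
        free(tmp);              /* caller frees, mirroring tcg_temp_free_i32() */
        tmp = load_value(2);
        store_reg32(tmp, 1);
        free(tmp);
        return 0;
    }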

View File

@ -50,12 +50,10 @@ initbundle() {
# To alter the content of this tarball, let's use git to track these changes, and produce patches which can be included
# in the package spec file
# The following can get things a bit out of the expected order, but I don't think that is really much of a problem, as long as we're guaranteed to patch the superproject before any of its submodules
# !!!! actually, simply using 'submodule status --recursive' without the additional processing we do for grokking the bundle files may be all we need
SUBMODULE_COMMIT_IDS=($(git -C ~/git/qemu-opensuse submodule status --recursive|awk '{print $1}'))
SUBMODULE_DIRS=($(git -C ~/git/qemu-opensuse submodule status --recursive|awk '{print $2}'))
SUBMODULE_COMMIT_IDS=($(git -C ${LOCAL_REPO_MAP[0]} submodule status --recursive|awk '{print $1}'))
SUBMODULE_DIRS=($(git -C ${LOCAL_REPO_MAP[0]} submodule status --recursive|awk '{print $2}'))
SUBMODULE_COUNT=${#SUBMODULE_COMMIT_IDS[@]}
# !!! I should be able to do this with simply math - ie: use (( ... ))
# TODO: do this with simple math, i.e. use (( ... ))
if [[ "$REPO_COUNT" != "$(expr $SUBMODULE_COUNT + 1)" ]]; then
echo "ERROR: submodule count doesn't match the REPO_COUNT variable in config.sh file!"
exit
@ -69,7 +67,7 @@ for (( i=0; i <$SUBMODULE_COUNT; i++ )); do
done
# also handle the superproject (I need to make this smarter, or change something - works for a tag, but not a normal commit):
GIT_UPSTREAM_COMMIT=$(git -C ~/git/qemu-opensuse show-ref -d $GIT_UPSTREAM_COMMIT_ISH|grep -F "^{}"|awk '{print $1}')
GIT_UPSTREAM_COMMIT=$(git -C ${LOCAL_REPO_MAP[0]} show-ref -d $GIT_UPSTREAM_COMMIT_ISH|grep -F "^{}"|awk '{print $1}')
touch $BUNDLE_DIR/$GIT_UPSTREAM_COMMIT.id
# Now go through all the submodule local repos that are present and create a bundle file for the patches found there
@ -100,11 +98,7 @@ rm -rf $GIT_DIR
#==============================================================================
bundle2local() {
rm -rf $GIT_DIR
rm -rf $CMP_DIR
rm -rf $BUNDLE_DIR
rm -f checkpatch.log
mkdir -p $BUNDLE_DIR
tar xJf bundles.tar.xz -C $BUNDLE_DIR
BUNDLE_FILES=$(find $BUNDLE_DIR -printf "%P\n"|grep "bundle$")
@ -137,13 +131,12 @@ for entry in ${BUNDLE_FILES[@]}; do
git -C $LOCAL_REPO remote add bundlerepo $BUNDLE_DIR/$entry
# in the next command, the head may be FETCH_HEAD or HEAD, depending on how the bundle was created:
git -C $LOCAL_REPO fetch bundlerepo FETCH_HEAD
#git -C $LOCAL_REPO fetch bundlerepo HEAD
git -C $LOCAL_REPO branch frombundle FETCH_HEAD
git -C $LOCAL_REPO remote remove bundlerepo
done
echo "For each local repo found a branch named frombundle contains the patches from the bundle."
echo "Use this as the starting point for making changes to the $GIT_BRANCH, which gets used as"
echo "the source when updating the bundle stored with the package."
echo "For each local repo found, a branch named frombundle is created containing the"
echo "patches from the bundle. Use this as the starting point for making changes to"
echo "the $GIT_BRANCH, which is used when updating the bundle stored with the package."
rm -rf $BUNDLE_DIR
}
@ -154,7 +147,7 @@ rm -rf $GIT_DIR
rm -rf $CMP_DIR
rm -rf $BUNDLE_DIR
rm -f checkpatch.log
rm -rf checkthese
rm -f checkthese
if [ "$GIT_UPSTREAM_COMMIT_ISH" = "LATEST" ]; then
# This is just a safety valve in case the above gets edited wrong:
@ -192,15 +185,15 @@ if [ "$OLD_SOURCE_VERSION_AND_EXTRA" = "" ]; then
fi
mkdir -p $BUNDLE_DIR
# TODO: (repo file not yet done)
# This tarball has git bundles stored in a directory structure which mimics the
# submodule locations in the containing git repo. Also at that same dir level
# is a file named repo which contains the one line git repo url (with git:// or
# http(s) prefix). The bundles are named as follows:
# "{path/}{git_sha}.{patch_prefix}.{bundle}", where {path/} isn't present for
# "{path/}{git_sha}.{bundle}", where {path/} isn't present for
# the top (qemu) bundle (ie it's for submodules).
tar xJf bundles.tar.xz -C $BUNDLE_DIR
# !!! The following may be overkill, since it seems that find does do a depth first, which is all we need
BUNDLE_FILES=$(find $BUNDLE_DIR -printf "%P\n"|grep "bundle$")
if [ "$GIT_UPSTREAM_COMMIT_ISH" = "LATEST" ]; then
@ -383,7 +376,6 @@ rm -rf $BUNDLE_DIR
if [[ "$NUMBERED_PATCHES" = "0" ]]; then
for i in [0-9]*.patch; do
osc rm --force $i
echo "calling osc rm on $i"
done
# we need to make sure that w/out the numbered prefixes, the patchnames are all unique
mkdir checkdir
@ -401,9 +393,6 @@ rm -rf $BUNDLE_DIR
else
CHECK_DIR=$CMP_DIR
fi
#step 0, and 0.1 are done above - question remains if the numbered case should use check dir
rm -f checkthese
if [ "$FIVE_DIGIT_POTENTIAL" = "0" ]; then
CHECK_PREFIX="0"
else
@ -438,19 +427,16 @@ rm -rf $BUNDLE_DIR
else
NUMBERED_PATCH_RE="^[[:digit:]]{5}-.*[.]patch$"
fi
# NEXT is #2 in the algorithm
for i in *.patch; do
if [[ $i =~ $NUMBERED_PATCH_RE ]]; then
if [[ "$NUMBERED_PATCHES" = "1" ]]; then
osc rm --force $i
echo "calling osc rm on $i"
echo " $i" >> qemu.changes.deleted
let DELETED_COUNT+=1
let TOTAL_COUNT+=1
fi
else
osc rm --force $i
echo "calling osc rm on $i"
echo " $i" >> qemu.changes.deleted
let DELETED_COUNT+=1
let TOTAL_COUNT+=1
@ -459,15 +445,11 @@ rm -rf $BUNDLE_DIR
mv $CHECK_DIR/* .
if [ -e qemu.changes.added ]; then
xargs osc add < qemu.changes.added
echo "calling osc add on:"; cat qemu.changes.added
fi
# NYI do we need this check?
if [ ! -e checkpatch.pl ]; then
if [[ -e checkthese ]]; then
tar Jxf qemu-$SOURCE_VERSION$VERSION_EXTRA.tar.xz \
qemu-$SOURCE_VERSION/scripts/checkpatch.pl --strip-components=2
fi
if [[ -e checkthese ]]; then
for i in $(cat checkthese); do
./checkpatch.pl --no-tree --terse --no-summary --summary-file \
--patch $i >> checkpatch.log || true
@ -609,12 +591,11 @@ osc service localrun format_spec_file
usage() {
echo "Usage:"
echo "git_update.sh: script to manage package maintenance using a git-based"
echo "workflow. Commands are as follows:"
echo " git2pkg (update package spec file and patches from git)"
echo "bash ./git_update.sh <command>: script to manage package maintenance"
echo "using a git-based workflow. Commands are as follows:"
echo " git2pkg (update package spec file and patches from git. Is default)"
echo " pkg2git (update git (frombundle branch) from the package "bundleofbundles")"
echo " refresh (refresh spec file from spec file template and "bundlofbundles")"
echo " (default is git2pkg)"
}
#==============================================================================
@ -643,7 +624,7 @@ case $1 in
echo "SUCCESS"
echo "To modify package patches, use the frombundle branch as the basis for updating"
echo "the $GIT_BRANCH branch with the new patch queue."
echo "Then export the changes back to the package using git2pkg.sh"
echo "Then export the changes back to the package using update_git.sh git2pkg"
;;
refresh )
echo "Updating the spec file and patches from the spec file template and the bundle"

View File

@ -0,0 +1,42 @@
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Wed, 14 Aug 2019 18:55:35 +0100
Subject: vhost: Fix memory region section comparison
Git-commit: 3fc4a64cbaed2ddee4c60ddc06740b320e18ab82
Using memcmp to compare structures wasn't safe,
as I found out on ARM when I was getting false miscompares.
Use the helper function for comparing the MRSs.
Fixes: ade6d081fc33948e56e6 ("vhost: Regenerate region list from changed sections list")
Cc: qemu-stable@nongnu.org
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190814175535.2023-4-dgilbert@redhat.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
hw/virtio/vhost.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index bc899fc60e8bad1651340910c1ca..2ef4bc720f04ddadca3305a73df2 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -451,8 +451,13 @@ static void vhost_commit(MemoryListener *listener)
changed = true;
} else {
/* Same size, lets check the contents */
- changed = n_old_sections && memcmp(dev->mem_sections, old_sections,
- n_old_sections * sizeof(old_sections[0])) != 0;
+ for (int i = 0; i < n_old_sections; i++) {
+ if (!MemoryRegionSection_eq(&old_sections[i],
+ &dev->mem_sections[i])) {
+ changed = true;
+ break;
+ }
+ }
}
trace_vhost_commit(dev->started, changed);
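
The reason whole-struct memcmp() can miscompare is that it also compares padding bytes, which the compiler never promises to keep equal between two otherwise identical structures. A small stand-alone illustration (the struct here is invented for this note; the fix above compares the sections field by field through a helper):

    #include <stdbool.h>
    #include <stdint.h>

    /* A layout with an alignment hole: on typical 64-bit ABIs, 7 padding
     * bytes follow 'tag' and their contents are indeterminate. */
    struct section_like {
        uint8_t  tag;
        uint64_t offset;
        void    *owner;
    };

    /* Field-wise comparison is reliable; memcmp() over sizeof(struct
     * section_like) may report a difference that exists only in padding. */
    static bool section_like_eq(const struct section_like *a,
                                const struct section_like *b)
    {
        return a->tag == b->tag &&
               a->offset == b->offset &&
               a->owner == b->owner;
    }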

View File

@ -0,0 +1,47 @@
From: Max Reitz <mreitz@redhat.com>
Date: Mon, 2 Sep 2019 21:33:16 +0200
Subject: vpc: Return 0 from vpc_co_create() on success
Git-commit: 1a37e3124407b5a145d44478d3ecbdb89c63789f
blockdev_create_run() directly uses .bdrv_co_create()'s return value as
the job's return value. Jobs must return 0 on success, not just any
nonnegative value. Therefore, using blockdev-create for VPC images may
currently fail as the vpc driver may return a positive integer.
Because there is no point in returning a positive integer anywhere in
the block layer (all non-negative integers are generally treated as
complete success), we probably do not want to add more such cases.
Therefore, fix this problem by making the vpc driver always return 0 in
case of success.
Suggested-by: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-stable@nongnu.org
Signed-off-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Bruce Rogers <brogers@suse.com>
---
block/vpc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/block/vpc.c b/block/vpc.c
index d4776ee8a5229ff43e8fb4fb6e0f..3a88e28e2be18553ff50a9b5c070 100644
--- a/block/vpc.c
+++ b/block/vpc.c
@@ -885,6 +885,7 @@ static int create_dynamic_disk(BlockBackend *blk, uint8_t *buf,
goto fail;
}
+ ret = 0;
fail:
return ret;
}
@@ -908,7 +909,7 @@ static int create_fixed_disk(BlockBackend *blk, uint8_t *buf,
return ret;
}
- return ret;
+ return 0;
}
static int calculate_rounded_image_size(BlockdevCreateOptionsVpc *vpc_opts,
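
The convention this patch enforces is that a .bdrv_co_create implementation reports success as exactly 0 and failure as a negative errno, and never lets a positive value from some underlying call leak out as its own return value. A minimal sketch of that shape (hypothetical helper names, not the vpc code):

    #include <errno.h>
    #include <stdint.h>

    /* Stand-in for a lower-level helper that may legitimately return a
     * positive value (e.g. a byte count) on success and -errno on failure. */
    static int do_format(int64_t size)
    {
        return size > 0 ? (int)size : -EINVAL;
    }

    /* Hypothetical create callback following the block-layer convention. */
    static int example_co_create(int64_t size)
    {
        int ret = do_format(size);
        if (ret < 0) {
            return ret;     /* propagate the negative errno */
        }
        return 0;           /* success is exactly 0, not 'ret' */
    }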