xen/559bc87f-x86-hvmloader-avoid-data-corruption-with-xenstore-rw.patch

# Commit bbbe7e7157a964c485fb861765be291734676932
# Date 2015-07-07 14:39:27 +0200
# Author Andrew Cooper <andrew.cooper3@citrix.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/hvmloader: avoid data corruption with xenstore reads/writes

The functions ring_read() and ring_write() have logic to try to deal with
partial reads and writes.

However, in all cases where the "while (len)" loop executed twice, data
corruption would occur as the second memcpy() starts from the beginning of
"data" again, rather than from where it got to.

This bug manifested itself as protocol corruption when a reply header crossed
the first wrap of the response ring. However, similar corruption would also
occur if hvmloader observed xenstored performing partial writes of the block
in question, or if hvmloader had to wait for xenstored to make space in either
ring.

Reported-by: Adam Kucia <djexit@o2.pl>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
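
The failure mode is easiest to see in isolation. The following is a minimal,
self-contained C sketch of the corrected chunked-copy loop, not hvmloader
code: RING_SIZE, MASK_IDX, ring_write_sketch and the single static ring are
illustrative assumptions, and flow control against a consumer is omitted.
The "done" cursor is what keeps each memcpy() after the first reading from
where the previous chunk stopped rather than re-copying the start of "data".

    /* Standalone illustration of the fixed copy loop (hypothetical names). */
    #include <stdint.h>
    #include <string.h>

    #define RING_SIZE 1024u                     /* power of two, like the xenstore ring */
    #define MASK_IDX(i) ((i) & (RING_SIZE - 1))

    static char ring[RING_SIZE];
    static uint32_t prod;                       /* free-running producer index */

    static void ring_write_sketch(const char *data, uint32_t len)
    {
        uint32_t part, done = 0;

        while ( len )
        {
            /* Largest contiguous span before the ring wraps. */
            part = RING_SIZE - MASK_IDX(prod);
            if ( part > len )
                part = len;

            /* Copy from where we left off, not from the start of "data". */
            memcpy(ring + MASK_IDX(prod), data + done, part);

            prod += part;
            len  -= part;
            done += part;
        }
    }

Dropping "done" (i.e. always copying from "data") reproduces the corruption
described above whenever the loop runs more than once.
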
--- a/tools/firmware/hvmloader/xenbus.c
+++ b/tools/firmware/hvmloader/xenbus.c
@@ -105,7 +105,7 @@ void xenbus_shutdown(void)
/* Helper functions: copy data in and out of the ring */
static void ring_write(const char *data, uint32_t len)
{
- uint32_t part;
+ uint32_t part, done = 0;
ASSERT(len <= XENSTORE_PAYLOAD_MAX);
@@ -122,16 +122,18 @@ static void ring_write(const char *data,
if ( part > len )
part = len;
- memcpy(rings->req + MASK_XENSTORE_IDX(rings->req_prod), data, part);
+ memcpy(rings->req + MASK_XENSTORE_IDX(rings->req_prod),
+ data + done, part);
barrier(); /* = wmb before prod write, rmb before next cons read */
rings->req_prod += part;
len -= part;
+ done += part;
}
}
static void ring_read(char *data, uint32_t len)
{
- uint32_t part;
+ uint32_t part, done = 0;
ASSERT(len <= XENSTORE_PAYLOAD_MAX);
@@ -148,10 +150,12 @@ static void ring_read(char *data, uint32
if ( part > len )
part = len;
- memcpy(data, rings->rsp + MASK_XENSTORE_IDX(rings->rsp_cons), part);
+ memcpy(data + done,
+ rings->rsp + MASK_XENSTORE_IDX(rings->rsp_cons), part);
barrier(); /* = wmb before cons write, rmb before next prod read */
rings->rsp_cons += part;
len -= part;
+ done += part;
}
}