# Commit eb7cd0593d88c4b967a24bca8bd30591966676cd
# Date 2024-09-12 09:13:04 +0200
# Author Jan Beulich <jbeulich@suse.com>
# Committer Jan Beulich <jbeulich@suse.com>
x86/HVM: properly reject "indirect" VRAM writes

While ->count will only be different from 1 for "indirect" (data in
guest memory) accesses, it being 1 does not exclude the request being an
"indirect" one. Check both to be on the safe side, and bring the ->count
part also in line with what ioreq_send_buffered() actually refuses to
handle.

Fixes: 3bbaaec09b1b ("x86/hvm: unify stdvga mmio intercept with standard mmio intercept")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -530,14 +530,14 @@ static bool cf_check stdvga_mem_accept(
 
     spin_lock(&s->lock);
 
-    if ( p->dir == IOREQ_WRITE && p->count > 1 )
+    if ( p->dir == IOREQ_WRITE && (p->data_is_ptr || p->count != 1) )
     {
         /*
          * We cannot return X86EMUL_UNHANDLEABLE on anything other then the
          * first cycle of an I/O. So, since we cannot guarantee to always be
          * able to send buffered writes, we have to reject any multi-cycle
-         * I/O and, since we are rejecting an I/O, we must invalidate the
-         * cache.
+         * or "indirect" I/O and, since we are rejecting an I/O, we must
+         * invalidate the cache.
          * Single-cycle write transactions are accepted even if the cache is
          * not active since we can assert, when in stdvga mode, that writes
          * to VRAM have no side effect and thus we can try to buffer them.
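
The new condition is exactly the set of requests ioreq_send_buffered() declines. As a reading aid only (not part of the patch), here is a minimal, self-contained sketch of that predicate. The struct is a simplified stand-in for Xen's public ioreq_t, keeping just the fields this check looks at, and is_bufferable_write() is a hypothetical helper name:

/*
 * Illustrative sketch, not from the patch: a simplified model of the
 * request fields the acceptance test examines. The real definition is
 * ioreq_t in xen/include/public/hvm/ioreq.h.
 */
#include <stdbool.h>
#include <stdint.h>

#define IOREQ_WRITE 1

struct ioreq {
    uint8_t  dir;          /* IOREQ_READ / IOREQ_WRITE */
    uint8_t  data_is_ptr;  /* non-zero: data lives in guest memory */
    uint32_t count;        /* repeat count (e.g. from a rep prefix) */
};

/*
 * A write can only be buffered when it is a single cycle carrying its
 * data inline; anything "indirect" (data_is_ptr set) or multi-cycle
 * (count != 1) must be rejected up front, matching what
 * ioreq_send_buffered() is willing to handle.
 */
static bool is_bufferable_write(const struct ioreq *p)
{
    return p->dir == IOREQ_WRITE && !p->data_is_ptr && p->count == 1;
}

In these terms, the patched check in stdvga_mem_accept() rejects precisely the write requests for which is_bufferable_write() would return false, where the old check missed single-cycle writes whose data sits in guest memory.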