[xen stable-4.9] x86/HVM: suppress I/O completion for port output

patchbot
commit dc527ffb2b2a1ce129e1dccc3e5eb4921c03def5
Author:     Jan Beulich <[hidden email]>
AuthorDate: Wed Apr 18 16:42:17 2018 +0200
Commit:     Jan Beulich <[hidden email]>
CommitDate: Wed Apr 18 16:42:17 2018 +0200

    x86/HVM: suppress I/O completion for port output
   
    We don't break up port requests in case they cross emulation entity
    boundaries, and a write to an I/O port is necessarily the last
    operation of an instruction instance, so there's no need to re-invoke
    the full emulation path upon receiving the result from an external
    emulator.
   
    In case we want to properly split port accesses in the future, this
    change will need to be reverted, as it would prevent things working
    correctly when e.g. the first part needs to go to an external emulator,
    while the second part is to be handled internally.
   
    While this addresses the reported problem of Windows paging out the
    buffer underneath an in-progress REP OUTS, it does not address the
    wider problem of the re-issued insn (to the insn emulator) being prone
    to raising an exception (#PF) during a replayed, previously successful
    memory access (we only record prior MMIO accesses).
   
    Leaving aside the problem being worked around here, I think the
    performance aspect alone is a good reason to change the behavior.
   
    Also take the opportunity to change hvm_vcpu_io_need_completion()'s
    return type from bool_t to bool.
   
    Signed-off-by: Jan Beulich <[hidden email]>
    Reviewed-by: Paul Durrant <[hidden email]>
    master commit: 91afb8139f954a06e564d4915bc7d6a8575e2812
    master date: 2018-04-11 10:42:24 +0200
---
 xen/arch/x86/hvm/emulate.c     | 6 +++++-
 xen/include/asm-x86/hvm/vcpu.h | 6 ++++--
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 9ce5ae0f9f..432b28e3e3 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -279,7 +279,11 @@ static int hvmemul_do_io(
             rc = hvm_send_ioreq(s, &p, 0);
             if ( rc != X86EMUL_RETRY || currd->is_shutting_down )
                 vio->io_req.state = STATE_IOREQ_NONE;
-            else if ( data_is_addr )
+            /*
+             * This effectively is !hvm_vcpu_io_need_completion(vio), slightly
+             * optimized and using local variables we have available.
+             */
+            else if ( data_is_addr || (!is_mmio && dir == IOREQ_WRITE) )
                 rc = X86EMUL_OKAY;
         }
         break;
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index 6c54773f1c..a9469a88f0 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -91,10 +91,12 @@ struct hvm_vcpu_io {
     const struct g2m_ioport *g2m_ioport;
 };
 
-static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
+static inline bool hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
 {
     return (vio->io_req.state == STATE_IOREQ_READY) &&
-           !vio->io_req.data_is_ptr;
+           !vio->io_req.data_is_ptr &&
+           (vio->io_req.type != IOREQ_TYPE_PIO ||
+            vio->io_req.dir != IOREQ_WRITE);
 }
 
 struct nestedvcpu {
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.9

_______________________________________________
Xen-changelog mailing list
[hidden email]
https://lists.xenproject.org/xen-changelog