Conversation
ea13cdf to d484b70 Compare
This PR brings singlestep support for Xen. I'm still having an issue listening on multiple VCPUs: I only receive events from VCPU 0.
src/driver/xen.rs (Outdated)

// enable singlestep monitoring
// it will only intercept events when explicitly requested using
// xc_domain_debug_control()
// TODO: call get_vcpu_count()
That comment is obsolete afaics, right?
Actually no.
I copy-pasted the get_vcpu_count() implementation here, because if I created the Xen struct below and tried to call get_vcpu_count() afterwards, there was an issue with xc being moved when constructing the Xen struct.
As I didn't have any other solution, I used this.
I suggest we make a function get_vcpu_count(xc: &Xc) and call it from both the trait and the Xen driver init function.
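For illustration, here is a minimal sketch of that suggestion; all names below are hypothetical stand-ins, not the actual libmicrovmi or xenctrl types. The point is that a free helper borrows the handle, so the driver init can query the VCPU count before moving `xc` into the struct.

```rust
// Hypothetical sketch: `Xc` stands in for the real libxc wrapper and
// `vcpu_count()` for whatever query it exposes; only the borrow-before-move
// pattern shared between init and the trait impl is the point here.
struct Xc {
    online_vcpus: u16,
}

impl Xc {
    fn vcpu_count(&self) -> u16 {
        self.online_vcpus
    }
}

// shared helper, callable from both the driver init and the trait impl
fn get_vcpu_count(xc: &Xc) -> u16 {
    xc.vcpu_count()
}

struct Xen {
    xc: Xc,
    vcpu_count: u16,
}

impl Xen {
    fn new(xc: Xc) -> Self {
        // borrow xc before moving it into the struct: no "use of moved value" error
        let vcpu_count = get_vcpu_count(&xc);
        Xen { xc, vcpu_count }
    }
}

fn main() {
    let xen = Xen::new(Xc { online_vcpus: 4 });
    println!("vcpus: {}", xen.vcpu_count);
}
```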
The problem was that I copy-pasted a chunk of code, so there was duplication.
If we update the definition of get_vcpu_count one day, the other copy becomes obsolete, etc.
This works fine for me.
Github won't display your link: Non-Image content-type returned
Oh. I don't know why I was seeing having a mutable Xen struct as a bad idea; it's not, since it's restricted to the init function anyway.
I refactored and implemented your changes, thank you.
d484b70 to 0d3f229 Compare
Codecov Report
@@ Coverage Diff @@
## master #150 +/- ##
==========================================
- Coverage 13.95% 13.84% -0.12%
==========================================
Files 5 5
Lines 480 484 +4
Branches 71 71
==========================================
Hits 67 67
- Misses 396 400 +4
Partials 17 17
Flags with carried forward coverage won't be shown. Click here to find out more.
Continue to review full report at Codecov.
a52c35f to 8d0dcae Compare
One issue that still needs a fix: I thought that closing the monitoring and unbinding would clear the queue, but apparently that's not the case. Looking at LibVMI, they do a final pass of listening for events, if any, to clear the queue.
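For reference, a minimal sketch of what such a final drain pass could look like. The `EventSource` trait and the mock below are hypothetical stand-ins for the xenevtchn/vm_event calls, not the actual libmicrovmi API.

```rust
// Hypothetical abstraction over the event channel / vm_event ring.
trait EventSource {
    /// returns a pending event port, or None once the queue is empty
    fn pending(&mut self) -> Option<u32>;
    /// consume the queued request on that port and acknowledge it
    fn consume_and_ack(&mut self, port: u32);
}

/// Final pass before unbinding: pop and acknowledge every queued request
/// so the domain is not left waiting on an unanswered vm_event.
fn drain_events<E: EventSource>(source: &mut E) {
    while let Some(port) = source.pending() {
        source.consume_and_ack(port);
    }
}

// tiny mock so the sketch actually runs
struct MockQueue(Vec<u32>);

impl EventSource for MockQueue {
    fn pending(&mut self) -> Option<u32> {
        self.0.pop()
    }
    fn consume_and_ack(&mut self, port: u32) {
        println!("acked port {}", port);
    }
}

fn main() {
    let mut queue = MockQueue(vec![1, 2, 3]);
    drain_events(&mut queue);
}
```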
match self.xev.xenevtchn_pending()? {
    -1 => {
        // no event channel port is pending
        // TODO: Err
Is there a reason why you haven't implemented this yet? Feels to me like it should be done in this PR.
Because this means creating custom error types, and I'm not feeling confident / knowledgeable enough on the topic to implement this.
Yeah, the overall error handling in this library is not very good. I once tried to refactor it, but got discouraged after realizing how much I would need to change.
At some point we should create a custom error type (or perhaps even multiple, one for each driver) and implement error wrapping instead of error boxing. This is the way. ;)
I'm totally in favor of something like this.
I'm simply lacking the time to invest in learning state of the art Rust, and to reflect the changes in libmicrovmi.
But my heart breaks every time I call unwrap() in libmicrovmi simply because I'm a noob in error handling.
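For what it's worth, here is a minimal std-only sketch of what a per-driver error type with wrapping (rather than boxing) could look like. The type and its variants are made up for illustration; a real version would cover the actual xenctrl/xenevtchn failure cases.

```rust
use std::fmt;

// Hypothetical per-driver error type: the variants are invented, but the
// wrapping pattern (source() + From) is the point.
#[derive(Debug)]
enum XenDriverError {
    // e.g. the "no event channel port is pending" case from the TODO above
    NoEventPending,
    // wraps an underlying failure instead of boxing it
    Io(std::io::Error),
}

impl fmt::Display for XenDriverError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            XenDriverError::NoEventPending => write!(f, "no event channel port is pending"),
            XenDriverError::Io(e) => write!(f, "xen driver I/O error: {}", e),
        }
    }
}

impl std::error::Error for XenDriverError {
    fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
        match self {
            XenDriverError::Io(e) => Some(e),
            XenDriverError::NoEventPending => None,
        }
    }
}

// lets `?` convert the underlying error automatically
impl From<std::io::Error> for XenDriverError {
    fn from(e: std::io::Error) -> Self {
        XenDriverError::Io(e)
    }
}
```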
src/driver/xen.rs (Outdated)

}
};
let back_ring_ptr = &mut self.back_ring;
match RING_HAS_UNCONSUMED_REQUESTS!(back_ring_ptr) != 0 {
Pattern matching is overkill for a simple boolean check. Also, didn't we come to the conclusion that all those macros could have been simple functions?
> Also, didn't we come to the conclusion that all those macros could have been simple functions?
Probably, but my focus right now is to make it work.
I'm sure we can improve afterwards.
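As a rough illustration of the macro-to-function idea, here is a heavily simplified sketch. The struct and the emptiness check are assumptions loosely modeled on Xen's ring.h (the real RING_HAS_UNCONSUMED_REQUESTS logic also accounts for outstanding response slots, which is omitted here), not the actual bindings used in this PR.

```rust
// Heavily simplified stand-in for the shared-ring back end.
struct BackRing {
    req_cons: u32,        // private request consumer index
    shared_req_prod: u32, // producer index read from the shared ring page
}

// What RING_HAS_UNCONSUMED_REQUESTS! could look like as a plain function
// returning a bool, so the call site becomes a simple `if`.
fn has_unconsumed_requests(ring: &BackRing) -> bool {
    ring.shared_req_prod != ring.req_cons
}

fn main() {
    let ring = BackRing { req_cons: 3, shared_req_prod: 5 };
    if has_unconsumed_requests(&ring) {
        println!("requests pending");
    }
}
```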
src/driver/xen.rs (Outdated)

let mut rsp: vm_event_response_t =
    unsafe { mem::MaybeUninit::<vm_event_response_t>::zeroed().assume_init() };
rsp.reason = req.reason;
rsp.version = VM_EVENT_INTERFACE_VERSION;
rsp.vcpu_id = req.vcpu_id;
rsp.flags = req.flags & add_flags;
Suggested change:

let mut rsp = vm_event_response_t {
    reason: req.reason,
    version: VM_EVENT_INTERFACE_VERSION,
    vcpu_id: req.vcpu_id,
    flags: req.flags & add_flags,
    ..Default::default()
};
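One note on this suggestion: the `..Default::default()` update syntax only compiles if `vm_event_response_t` actually implements `Default` (bindgen can generate that when its derive-default option is enabled); if it doesn't, that may be why the zeroed MaybeUninit approach was used. A tiny self-contained illustration with a made-up stand-in type:

```rust
// `Resp` is a made-up stand-in for the bindgen-generated vm_event_response_t;
// field names and values are illustrative only.
#[derive(Default, Debug)]
#[repr(C)]
struct Resp {
    reason: u32,
    version: u32,
    vcpu_id: u32,
    flags: u64,
    reserved: u32, // left at its zero/default value by the struct update syntax
}

fn main() {
    let rsp = Resp {
        reason: 1,
        version: 1,
        vcpu_id: 0,
        flags: 0,
        ..Default::default()
    };
    println!("{:?}", rsp);
}
```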
src/driver/xen.rs (Outdated)

let op: u32 = match enabled {
    false => XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF,
    true => XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON,
};
Suggested change:

let op: u32 = if enabled {
    XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON
} else {
    XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF
};
The latest commits implemented your fixes, @rageagainsthepc. I also implemented a cleanup of the ring when we close the Xen driver. My domain is still frozen.
cd4d305 to e12a4c0 Compare
The cleanup code looks good to me. 🤔
So, latest update: the domain survives now, but only if it has 1 VCPU. I added a [...]
Here is the compile with [...]
What of it? Do you use it as a "control group" to see what we did differently? Does it work without a problem?
Yes, it is a small example that catches one singlestep event and quits, with prints on every Xen API call.



Continuing #113