Error accessing memory mapped by xenforeignmemory_map()


Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
I'm trying to use the "xenforeignmemory" library to read arbitrary
memory ranges from a Xen domain. The code performing the reads is
designed to run in dom0 on a Zynq UltraScale+ MPSoC (ARM64), though I'm
currently testing in QEMU. I constructed a simple test program, which
reads an arbitrary domid/address pair from the command line, converts
the address (assumed to be physical) to a page frame number, and uses
xenforeignmemory_map() to map the page into the test app's virtual
memory space. Although xenforeignmemory_map() returns a non-NULL
pointer, my attempt to dereference it fails with the following error:

(XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
gpa=0x00000030555000
[   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
at 0x0000007f965f7000
Bus error
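For reference, a minimal sketch of such a test program against the public
xenforeignmemory API (tools/libs/foreignmemory) might look as follows. This is
an illustration of the call sequence, not the exact program from this report:
it assumes 4K pages, requires the Xen toolstack headers, and must run in a
privileged domain, so treat it as a sketch rather than a tested build.

```c
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <xenctrl.h>            /* xen_pfn_t */
#include <xenforeignmemory.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <domid> <physaddr>\n", argv[0]);
        return 1;
    }

    uint32_t domid = (uint32_t)strtoul(argv[1], NULL, 0);
    uint64_t paddr = strtoull(argv[2], NULL, 0);
    xen_pfn_t pfn = paddr >> 12;  /* physical address -> frame number (4K pages) */
    int err = 0;

    xenforeignmemory_handle *fmem = xenforeignmemory_open(NULL, 0);
    if (!fmem) {
        perror("xenforeignmemory_open");
        return 1;
    }

    void *p = xenforeignmemory_map(fmem, domid, PROT_READ, 1, &pfn, &err);
    /* A non-NULL return is not sufficient: each entry of the err array
     * must also be checked (0 means the corresponding page mapped). */
    if (!p || err) {
        fprintf(stderr, "map failed: p=%p err=%d\n", p, err);
        return 1;
    }

    printf("first word: 0x%" PRIx64 "\n", *(volatile uint64_t *)p);

    xenforeignmemory_unmap(fmem, p, 1);
    xenforeignmemory_close(fmem);
    return 0;
}
```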

It's not clear to me which address is causing the fault: the (dom0)
guest-virtual address (0x7f965f7000), the guest-physical address
(0x30555000), or the arbitrary physical address I'm attempting to map
(not shown). The guest-virtual address is the one returned by the mmap()
call buried within xenforeignmemory_map(), so I don't have any control
over it. I'm not an ARM expert, but my understanding of the "ttbr
address size" fault is that it's generated when a physical address
exceeding the ranges defined in one of the control registers involved in
page-table lookups is placed on the address bus. I'm not modifying the
page tables constructed by the Xilinx Linux kernel in any way, so it
seems odd that mmap() would be allocating the buffer at an illegal
address.

My ultimate goal is to map physical addresses from a user domain into
dom0, but for now, I'm simply trying to map physical addresses from dom0
itself. (I'm assuming the attempt to pass domid==0 in the call to
xenforeignmemory_map() would have generated an error if mapping dom0's
memory space were not supported.) The idea is to be able to read
kernel code/data mapped at fixed (physical) addresses in a guest.

First of all, I'd like to know whether what I'm attempting to do is
valid: i.e., can I use xenforeignmemory_map() to read an arbitrary page
(specified by guest-physical page number) in an arbitrary guest domain
(including but not limited to dom0)? If the concept is valid, is there
perhaps something I need to do with the pointer returned by
xenforeignmemory_map() before attempting to dereference? (I noticed a
post-processing call to some sort of "normalise_page" function in at
least one xen tool that uses xenforeignmemory_map(), which made me
wonder whether there might be scenarios in which the buffer returned by
xenforeignmemory_map() was not ready for immediate use.)

I'd appreciate any insight anyone can provide into any of this. I've
been unable to find much documentation on use of the "foreignmemory"
interface. Links to documentation and/or a project that uses the
"foreignmemory" interface would be greatly appreciated. I considered
posting to xen-devel, but I wanted to be sure that what I was trying to
do made sense before reporting a possible bug...

Thanks,
Brett S.

_______________________________________________
Xen-users mailing list
[hidden email]
https://lists.xen.org/xen-users

Re: Error accessing memory mapped by xenforeignmemory_map()

Roger Pau Monné
Adding the ARM maintainers.

On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:

> I'm trying to use the "xenforeignmemory" library to read arbitrary
> memory ranges from a Xen domain. The code performing the reads is
> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
> currently testing in QEMU. I constructed a simple test program, which
> reads an arbitrary domid/address pair from the command line, converts
> the address (assumed to be physical) to a page frame number, and uses
> xenforeignmemory_map() to map the page into the test app's virtual
> memory space. Although xenforeignmemory_map() returns a non-NULL
> pointer, my attempt to dereference it fails with the following error:
>
> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
> gpa=0x00000030555000
> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
> at 0x0000007f965f7000
> Bus error

I'm not sure what a Bus error means on ARM; have you tried looking at
traps.c:2508 to see if there's a comment explaining why this fault is
triggered?

I'm not sure the xenforeignmemory library is used on ARM, since IIRC on
x86 it's mainly used for QEMU device emulation, which is not done for
ARM. There are examples of guest memory mappings in tools/libxc/, for
example xc_dom_boot.c, although that uses the
xc_map_foreign_ranges interface.

Roger.


Re: Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <[hidden email]> wrote:

> Adding the ARM maintainers.
>
> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>> memory ranges from a Xen domain. The code performing the reads is
>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
>> currently testing in QEMU. I constructed a simple test program, which
>> reads an arbitrary domid/address pair from the command line, converts
>> the address (assumed to be physical) to a page frame number, and uses
>> xenforeignmemory_map() to map the page into the test app's virtual
>> memory space. Although xenforeignmemory_map() returns a non-NULL
>> pointer, my attempt to dereference it fails with the following error:
>>
>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>> gpa=0x00000030555000
>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>> at 0x0000007f965f7000
>> Bus error
>
> I'm not sure what a Bus error means on ARM, have you tried to look
> at traps.c:2508 to see if there's some comment explaining why this
> fault is triggered?

I believe the fault is occurring because mmap() failed to map the page.
Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
code comments indicate that this does not imply success: page-level
errors might still be returned in the provided "err" array. In my case,
it appears that an EINVAL is produced by mmap(): specifically, I believe
it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
there are a number of conditions that can produce this error code, and I
haven't yet determined which is to blame...

So although I'm not sure why I would get an "address size" fault, it
makes sense that the pointer dereference would generate some sort of
paging-related fault, given that the page mapping was unsuccessful.
Hopefully, ARM developers will be able to explain why it was
unsuccessful, or at least give me an idea of what sorts of things could
cause a mapping attempt to fail... At this point, I'm not particular
about what address I map. I just want to be able to read known data at a
fixed (non-paged) address (e.g., kernel code/data), so I can prove to
myself that the page is actually mapped.

>
> I'm not sure the xenforeigmemory library is used on ARM, since IIRC on
> x86 that's mainly used for QEMU device emulation, which is not done
> for ARM. There are examples of guest memory mappings on tools/libxc/,
> for example xc_dom_boot.c, although that's using the
> xc_map_foreign_ranges interface.

Thanks. I'll have a look at this...
Brett S.

>
> Roger.


Re: Error accessing memory mapped by xenforeignmemory_map()

Stefano Stabellini
CC'ing the tools Maintainers and Paul

On Fri, 27 Oct 2017, Brett Stahlman wrote:

> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <[hidden email]> wrote:
> > Adding the ARM maintainers.
> >
> > On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
> >> I'm trying to use the "xenforeignmemory" library to read arbitrary
> >> memory ranges from a Xen domain. The code performing the reads is
> >> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
> >> currently testing in QEMU. I constructed a simple test program, which
> >> reads an arbitrary domid/address pair from the command line, converts
> >> the address (assumed to be physical) to a page frame number, and uses
> >> xenforeignmemory_map() to map the page into the test app's virtual
> >> memory space. Although xenforeignmemory_map() returns a non-NULL
> >> pointer, my attempt to dereference it fails with the following error:
> >>
> >> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
> >> gpa=0x00000030555000
> >>
> >> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
> >> at 0x0000007f965f7000
> >> Bus error
> >
> > I'm not sure what a Bus error means on ARM, have you tried to look
> > at traps.c:2508 to see if there's some comment explaining why this
> > fault is triggered?
>
> I believe the fault is occurring because mmap() failed to map the page.
> Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
> code comments indicate that this does not imply success: page-level
> errors might still be returned in the provided "err" array. In my case,
> it appears that an EINVAL is produced by mmap(): specifically, I believe
> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
> there are a number of conditions that can produce this error code, and I
> haven't yet determined which is to blame...
>
> So although I'm not sure why I would get an "address size" fault, it
> makes sense that the pointer dereference would generate some sort of
> paging-related fault, given that the page mapping was unsuccessful.
> Hopefully, ARM developers will be able to explain why it was
> unsuccessful, or at least give me an idea of what sorts of things could
> cause a mapping attempt to fail... At this point, I'm not particular
> about what address I map. I just want to be able to read known data at a
> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
> myself that the page is actually mapped.

The fault means "Data Abort from a lower Exception level". It could be
an MMU fault or an alignment fault, according to the ARM ARM.

I guess that the address range is not good. What DomU addresses are you
trying to map?



> > I'm not sure the xenforeigmemory library is used on ARM, since IIRC on
> > x86 that's mainly used for QEMU device emulation, which is not done
> > for ARM. There are examples of guest memory mappings on tools/libxc/,
> > for example xc_dom_boot.c, although that's using the
> > xc_map_foreign_ranges interface.
>
> Thanks. I'll have a look at this...
> Brett S.

Re: Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
<[hidden email]> wrote:

> CC'ing the tools Maintainers and Paul
>
> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <[hidden email]> wrote:
>> > Adding the ARM maintainers.
>> >
>> > On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>> >> I'm trying to use the "xenforeignmemory" library to read arbitrary
>> >> memory ranges from a Xen domain. The code performing the reads is
>> >> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
>> >> currently testing in QEMU. I constructed a simple test program, which
>> >> reads an arbitrary domid/address pair from the command line, converts
>> >> the address (assumed to be physical) to a page frame number, and uses
>> >> xenforeignmemory_map() to map the page into the test app's virtual
>> >> memory space. Although xenforeignmemory_map() returns a non-NULL
>> >> pointer, my attempt to dereference it fails with the following error:
>> >>
>> >> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>> >> gpa=0x00000030555000
>> >>
>> >> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>> >> at 0x0000007f965f7000
>> >> Bus error
>> >
>> > I'm not sure what a Bus error means on ARM, have you tried to look
>> > at traps.c:2508 to see if there's some comment explaining why this
>> > fault is triggered?
>>
>> I believe the fault is occurring because mmap() failed to map the page.
>> Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
>> code comments indicate that this does not imply success: page-level
>> errors might still be returned in the provided "err" array. In my case,
>> it appears that an EINVAL is produced by mmap(): specifically, I believe
>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
>> there are a number of conditions that can produce this error code, and I
>> haven't yet determined which is to blame...
>>
>> So although I'm not sure why I would get an "address size" fault, it
>> makes sense that the pointer dereference would generate some sort of
>> paging-related fault, given that the page mapping was unsuccessful.
>> Hopefully, ARM developers will be able to explain why it was
>> unsuccessful, or at least give me an idea of what sorts of things could
>> cause a mapping attempt to fail... At this point, I'm not particular
>> about what address I map. I just want to be able to read known data at a
>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
>> myself that the page is actually mapped.
>
> The fault means "Data Abort from a lower Exception level". It could be
> an MMU fault or an alignment fault, according to the ARM ARM.
>
> I guess that the address range is not good. What DomU addresses are you
> trying to map?

The intent was to map fixed "guest physical" addresses corresponding to
(e.g.) the "zero page" of a guest's running kernel. Up until today, I'd
assumed that a PV guest's kernel would be loaded at a known "guest
physical" address (like 0x100000 on i386), and that such addresses
corresponded to the gfns expected by xenforeignmemory_map(). But now I
suspect this was an incorrect assumption, at least for the PV case. I've
had trouble finding relevant documentation on the Xen site, but I did
find a presentation earlier today suggesting that for PVs, gfn == mfn,
which, IIUC, would effectively preclude the use of fixed addresses in a
PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
a "known" address (e.g., 0x100000 on i386).

Perhaps my use case (reading a guest kernel's code/data from dom0) makes
sense for an HVM, but not a PV? Is it not possible for dom0 to use the
foreignmemory interface to map PV guest pages read-only, without knowing
in advance what, if anything, those pages represent in the guest? Or is
the problem that the very concept of "guest physical" doesn't exist in a
PV? I guess it would help if I had a better understanding of what sort
of frame numbers are expected by xenforeignmemory_map() when the target
VM is a PV. Is the Xen code the only documentation for this sort of
thing, or is there some place I could get a high-level overview?

Thanks,
Brett S.

>
>
>
>> > I'm not sure the xenforeigmemory library is used on ARM, since IIRC on
>> > x86 that's mainly used for QEMU device emulation, which is not done
>> > for ARM. There are examples of guest memory mappings on tools/libxc/,
>> > for example xc_dom_boot.c, although that's using the
>> > xc_map_foreign_ranges interface.
>>
>> Thanks. I'll have a look at this...
>> Brett S.


Re: Error accessing memory mapped by xenforeignmemory_map()

Julien Grall
In reply to this post by Roger Pau Monné
Hi Roger,

I will answer the rest in my reply to Stefano's e-mail.

On 27/10/2017 15:31, Roger Pau Monné wrote:
> I'm not sure the xenforeigmemory library is used on ARM, since IIRC on
> x86 that's mainly used for QEMU device emulation, which is not done
> for ARM. There are examples of guest memory mappings on tools/libxc/,
> for example xc_dom_boot.c, although that's using the
> xc_map_foreign_ranges interface.

The xenforeignmemory library is in use on Arm. In fact,
xc_map_foreign_ranges is a wrapper around a function from that library.

Cheers,

--
Julien Grall


Re: Error accessing memory mapped by xenforeignmemory_map()

Julien Grall
In reply to this post by Stefano Stabellini
Hello,

On 27/10/2017 21:22, Stefano Stabellini wrote:

> CC'ing the tools Maintainers and Paul
>
> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <[hidden email]> wrote:
>>> Adding the ARM maintainers.
>>>
>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>>>> memory ranges from a Xen domain. The code performing the reads is
>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
>>>> currently testing in QEMU. I constructed a simple test program, which
>>>> reads an arbitrary domid/address pair from the command line, converts
>>>> the address (assumed to be physical) to a page frame number, and uses
>>>> xenforeignmemory_map() to map the page into the test app's virtual
>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
>>>> pointer, my attempt to dereference it fails with the following error:
>>>>
>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>>>> gpa=0x00000030555000
>>>>
>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>>>> at 0x0000007f965f7000
>>>> Bus error
>>>
>>> I'm not sure what a Bus error means on ARM, have you tried to look
>>> at traps.c:2508 to see if there's some comment explaining why this
>>> fault is triggered?
>>
>> I believe the fault is occurring because mmap() failed to map the page.
>> Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
>> code comments indicate that this does not imply success: page-level
>> errors might still be returned in the provided "err" array. In my case,
>> it appears that an EINVAL is produced by mmap(): specifically, I believe
>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
>> there are a number of conditions that can produce this error code, and I
>> haven't yet determined which is to blame...
>>
>> So although I'm not sure why I would get an "address size" fault, it

For Arm64 guests, when Xen receives a data abort from the guest (i.e.,
there is a problem with the stage-2 mapping) and is unable to handle it,
an "address size" fault is injected into the guest.

I agree this is really confusing, and it is because Xen does not
populate the FSC (Fault Status Code) in ESR_EL1.

Looking at Arm32, we always inject the fault as a debug exception. This
is a bit better.

We at least need to improve the fault reported to Arm64 guests, maybe by
using "synchronous external abort" (0b010000). I will send a patch for
that.

>> makes sense that the pointer dereference would generate some sort of
>> paging-related fault, given that the page mapping was unsuccessful.
>> Hopefully, ARM developers will be able to explain why it was
>> unsuccessful, or at least give me an idea of what sorts of things could
>> cause a mapping attempt to fail... At this point, I'm not particular
>> about what address I map. I just want to be able to read known data at a
>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
>> myself that the page is actually mapped.
>
> The fault means "Data Abort from a lower Exception level". It could be
> an MMU fault or an alignment fault, according to the ARM ARM.

Per D4.7.3 in ARM DDI 0487B.a, an alignment fault will be taken as a
stage-1 fault and hence not received by Xen.

Furthermore, you can find a bit more information on the fault by
decoding the HSR. From the logs, HSR/ESR_EL2 is 0x93810007, so this is a
level-3 translation fault, meaning the page you are trying to access is
not mapped in stage 2.

Cheers,

--
Julien Grall


Re: Error accessing memory mapped by xenforeignmemory_map()

Julien Grall
In reply to this post by Brett Stahlman
Hello Brett,

On 27/10/2017 22:58, Brett Stahlman wrote:

> On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
> <[hidden email]> wrote:
>> CC'ing the tools Maintainers and Paul
>>
>> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <[hidden email]> wrote:
>>>> Adding the ARM maintainers.
>>>>
>>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>>>>> memory ranges from a Xen domain. The code performing the reads is
>>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
>>>>> currently testing in QEMU. I constructed a simple test program, which
>>>>> reads an arbitrary domid/address pair from the command line, converts
>>>>> the address (assumed to be physical) to a page frame number, and uses
>>>>> xenforeignmemory_map() to map the page into the test app's virtual
>>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
>>>>> pointer, my attempt to dereference it fails with the following error:
>>>>>
>>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>>>>> gpa=0x00000030555000
>>>>>
>>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>>>>> at 0x0000007f965f7000
>>>>> Bus error
>>>>
>>>> I'm not sure what a Bus error means on ARM, have you tried to look
>>>> at traps.c:2508 to see if there's some comment explaining why this
>>>> fault is triggered?
>>>
>>> I believe the fault is occurring because mmap() failed to map the page.
>>> Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
>>> code comments indicate that this does not imply success: page-level
>>> errors might still be returned in the provided "err" array. In my case,
>>> it appears that an EINVAL is produced by mmap(): specifically, I believe
>>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
>>> there are a number of conditions that can produce this error code, and I
>>> haven't yet determined which is to blame...
>>>
>>> So although I'm not sure why I would get an "address size" fault, it
>>> makes sense that the pointer dereference would generate some sort of
>>> paging-related fault, given that the page mapping was unsuccessful.
>>> Hopefully, ARM developers will be able to explain why it was
>>> unsuccessful, or at least give me an idea of what sorts of things could
>>> cause a mapping attempt to fail... At this point, I'm not particular
>>> about what address I map. I just want to be able to read known data at a
>>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
>>> myself that the page is actually mapped.
>>
>> The fault means "Data Abort from a lower Exception level". It could be
>> an MMU fault or an alignment fault, according to the ARM ARM.
>>
>> I guess that the address range is not good. What DomU addresses are you
>> trying to map?
>
> The intent was to map fixed "guest physical" addresses corresponding to
> (e.g) the "zero page" of a guest's running kernel. Up until today, I'd

What do you mean by "zero page"? Is it guest physical address 0? If
so, the current guest memory layout does not have anything mapped at
that address.

> assumed that a PV guest's kernel would be loaded at a known "guest
> physical" address (like 0x100000 on i386), and that such addresses
> corresponded to the gfn's expected by xenforeignmemory_map(). But now I
> suspect this was an incorrect assumption, at least for the PV case. I've
> had trouble finding relevant documentation on the Xen site, but I did
> find a presentation earlier today suggesting that for PV's, gfn == mfn,
> which IIUC, would effectively preclude the use of fixed addresses in a
> PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
> a "known" address (e.g., 0x100000 on i386).
>
> Perhaps my use case (reading a guest kernel's code/data from dom0) makes
> sense for an HVM, but not a PV? Is it not possible for dom0 to use the
> foreignmemory interface to map PV guest pages read-only, without knowing
> in advance what, if anything, those pages represent in the guest? Or is
> the problem that the very concept of "guest physical" doesn't exist in a
> PV? I guess it would help if I had a better understanding of what sort
> of frame numbers are expected by xenforeignmemory_map() when the target
> VM is a PV. Is the Xen code the only documentation for this sort of
> thing, or is there some place I could get a high-level overview?

I am a bit confused by the rest of this e-mail. There is no concept of
HVM or PV on Arm; that distinction is x86-specific. On Arm, there is a
single type of guest that borrows the best of both HVM and PV.

For instance, like HVM, the hardware is used to provide a separate
address space for each virtual machine. Arm calls this stage-2
translation. So gfn != mfn.
Cheers,

--
Julien Grall


Re: Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
Hello Julien,

On Sun, Oct 29, 2017 at 3:37 PM, Julien Grall <[hidden email]> wrote:

> Hello Brett,
>
> On 27/10/2017 22:58, Brett Stahlman wrote:
>>
>> On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
>> <[hidden email]> wrote:
>>>
>>> CC'ing the tools Maintainers and Paul
>>>
>>> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>>>>
>>>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <[hidden email]>
>>>> wrote:
>>>>>
>>>>> Adding the ARM maintainers.
>>>>>
>>>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>>>>>>
>>>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>>>>>> memory ranges from a Xen domain. The code performing the reads is
>>>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
>>>>>> currently testing in QEMU. I constructed a simple test program, which
>>>>>> reads an arbitrary domid/address pair from the command line, converts
>>>>>> the address (assumed to be physical) to a page frame number, and uses
>>>>>> xenforeignmemory_map() to map the page into the test app's virtual
>>>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
>>>>>> pointer, my attempt to dereference it fails with the following error:
>>>>>>
>>>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>>>>>> gpa=0x00000030555000
>>>>>>
>>>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>>>>>> at 0x0000007f965f7000
>>>>>> Bus error
>>>>>
>>>>>
>>>>> I'm not sure what a Bus error means on ARM, have you tried to look
>>>>> at traps.c:2508 to see if there's some comment explaining why this
>>>>> fault is triggered?
>>>>
>>>>
>>>> I believe the fault is occurring because mmap() failed to map the page.
>>>> Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
>>>> code comments indicate that this does not imply success: page-level
>>>> errors might still be returned in the provided "err" array. In my case,
>>>> it appears that an EINVAL is produced by mmap(): specifically, I believe
>>>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
>>>> there are a number of conditions that can produce this error code, and I
>>>> haven't yet determined which is to blame...
>>>>
>>>> So although I'm not sure why I would get an "address size" fault, it
>>>> makes sense that the pointer dereference would generate some sort of
>>>> paging-related fault, given that the page mapping was unsuccessful.
>>>> Hopefully, ARM developers will be able to explain why it was
>>>> unsuccessful, or at least give me an idea of what sorts of things could
>>>> cause a mapping attempt to fail... At this point, I'm not particular
>>>> about what address I map. I just want to be able to read known data at a
>>>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
>>>> myself that the page is actually mapped.
>>>
>>>
>>> The fault means "Data Abort from a lower Exception level". It could be
>>> an MMU fault or an alignment fault, according to the ARM ARM.
>>>
>>> I guess that the address range is not good. What DomU addresses are you
>>> trying to map?
>>
>>
>> The intent was to map fixed "guest physical" addresses corresponding to
>> (e.g) the "zero page" of a guest's running kernel. Up until today, I'd
>
>
> What do you mean by "zero page"? Is it the guest physical address 0? If so,
> the current guest memory layout does not have anything mapped at the
> address.

No. I didn't mean guest physical address 0, but rather the start of the
linux kernel itself: specifically, the code in head.S. IIUC, the kernel
bootstrap code responsible for decompressing the kernel typically loads
this code at a fixed address, which, on x86 architectures, happens to be
0x100000. Thus, my assumption has been that if an unmodified Linux OS
were run in an x86 Xen guest, Xen would need to map guest physical
address 0x100000 to the machine physical address where the guest Linux
kernel is actually loaded. I'd also been assuming that if code running
in dom0 wished to use the foreignmemory interface to read the first page
of such a guest's kernel, it would need to provide the "guest physical"
address 0x100000 to xenforeignmemory_map(). I'm still thinking this may
be true for an *unmodified* guest (i.e., HVM), but having read more
about Xen's paravirtualized memory over the weekend, I'm thinking it
would not hold true for a paravirtualized (PV) guest, which doesn't have
the same concept of "guest physical" addresses.

>
>> assumed that a PV guest's kernel would be loaded at a known "guest
>> physical" address (like 0x100000 on i386), and that such addresses
>> corresponded to the gfn's expected by xenforeignmemory_map(). But now I
>> suspect this was an incorrect assumption, at least for the PV case. I've
>> had trouble finding relevant documentation on the Xen site, but I did
>> find a presentation earlier today suggesting that for PV's, gfn == mfn,
>> which IIUC, would effectively preclude the use of fixed addresses in a
>> PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
>> a "known" address (e.g., 0x100000 on i386).
>>
>> Perhaps my use case (reading a guest kernel's code/data from dom0) makes
>> sense for an HVM, but not a PV? Is it not possible for dom0 to use the
>> foreignmemory interface to map PV guest pages read-only, without knowing
>> in advance what, if anything, those pages represent in the guest? Or is
>> the problem that the very concept of "guest physical" doesn't exist in a
>> PV? I guess it would help if I had a better understanding of what sort
>> of frame numbers are expected by xenforeignmemory_map() when the target
>> VM is a PV. Is the Xen code the only documentation for this sort of
>> thing, or is there some place I could get a high-level overview?
>
>
> I am a bit confused with the rest of this e-mail. There are no concept of
> HVM or PV on Arm. This is x86 specific. For Arm, there is a single type of
> guest that borrow the goods of both HVM and PV.
>
> For instance, like HVM, the hardware is used to provide a separate address
> space for each virtual machine. Arm calls that stage-2 translation. So gfn
> != mfn.

I was not aware that the HVM/PV concept didn't apply directly to ARM.
Is there a document that summarizes the way Xen's address translation
works on ARM? The document I've been looking at is...

https://wiki.xen.org/wiki/X86_Paravirtualised_Memory_Management

...but I haven't found anything analogous for ARM. At any rate, if the
ARM hardware is providing a separate address space for each VM, then I
suppose the concept of "guest physical" addresses is still valid. Does a
guest physical address correspond to the output of stage-1 translation?
And are guest kernels on ARM generally loaded at fixed addresses (like
0x100000 in the x86 case), or is the kernel load address determined
dynamically?


Thanks,
Brett S.

>
> Cheers,
>
> --
> Julien Grall

_______________________________________________
Xen-users mailing list
[hidden email]
https://lists.xen.org/xen-users

Re: Error accessing memory mapped by xenforeignmemory_map()

Julien Grall-3
On 30/10/17 16:26, Brett Stahlman wrote:
> Hello Julien,

Hello Brett,

> On Sun, Oct 29, 2017 at 3:37 PM, Julien Grall <[hidden email]> wrote:
>> Hello Brett,
>>
>> On 27/10/2017 22:58, Brett Stahlman wrote:
>>>
>>> On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
>>> <[hidden email]> wrote:
>>>>
>>>> CC'ing the tools Maintainers and Paul
>>>>
>>>> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>>>>>
>>>>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné <[hidden email]>
>>>>> wrote:
>>>>>>
>>>>>> Adding the ARM maintainers.
>>>>>>
>>>>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>>>>>>>
>>>>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>>>>>>> memory ranges from a Xen domain. The code performing the reads is
>>>>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though I'm
>>>>>>> currently testing in QEMU. I constructed a simple test program, which
>>>>>>> reads an arbitrary domid/address pair from the command line, converts
>>>>>>> the address (assumed to be physical) to a page frame number, and uses
>>>>>>> xenforeignmemory_map() to map the page into the test app's virtual
>>>>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
>>>>>>> pointer, my attempt to dereference it fails with the following error:
>>>>>>>
>>>>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>>>>>>> gpa=0x00000030555000
>>>>>>>
>>>>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>>>>>>> at 0x0000007f965f7000
>>>>>>> Bus error
>>>>>>
>>>>>>
>>>>>> I'm not sure what a Bus error means on ARM, have you tried to look
>>>>>> at traps.c:2508 to see if there's some comment explaining why this
>>>>>> fault is triggered?
>>>>>
>>>>>
>>>>> I believe the fault is occurring because mmap() failed to map the page.
>>>>> Although xenforeignmemory_map() is indeed returning a non-NULL pointer,
>>>>> code comments indicate that this does not imply success: page-level
>>>>> errors might still be returned in the provided "err" array. In my case,
>>>>> it appears that an EINVAL is produced by mmap(): specifically, I believe
>>>>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c), but
>>>>> there are a number of conditions that can produce this error code, and I
>>>>> haven't yet determined which is to blame...
>>>>>
>>>>> So although I'm not sure why I would get an "address size" fault, it
>>>>> makes sense that the pointer dereference would generate some sort of
>>>>> paging-related fault, given that the page mapping was unsuccessful.
>>>>> Hopefully, ARM developers will be able to explain why it was
>>>>> unsuccessful, or at least give me an idea of what sorts of things could
>>>>> cause a mapping attempt to fail... At this point, I'm not particular
>>>>> about what address I map. I just want to be able to read known data at a
>>>>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
>>>>> myself that the page is actually mapped.
>>>>
>>>>
>>>> The fault means "Data Abort from a lower Exception level". It could be
>>>> an MMU fault or an alignment fault, according to the ARM ARM.
>>>>
>>>> I guess that the address range is not good. What DomU addresses are you
>>>> trying to map?
>>>
>>>
>>> The intent was to map fixed "guest physical" addresses corresponding to
>>> (e.g) the "zero page" of a guest's running kernel. Up until today, I'd
>>
>>
>> What do you mean by "zero page"? Is it the guest physical address 0? If so,
>> the current guest memory layout does not have anything mapped at the
>> address.
>
> No. I didn't mean guest physical address 0, but rather the start of the
> linux kernel itself: specifically, the code in head.S. IIUC, the kernel
> bootstrap code responsible for decompressing the kernel typically loads
> this code at a fixed address, which on x86 architectures, happens to be
> 0x100000. Thus, my assumption has been that if an unmodified Linux OS
> were run in an x86 Xen guest, Xen would need to map guest physical
> address 0x100000 to the machine physical address where the guest Linux
> kernel is actually loaded. I'd also been assuming that if code running
> in dom0 wished to use the foreignmemory interface to read the first page
> of such a guest's kernel, it would need to provide the "guest physical"
> address 0x100000 to xenforeignmemory_map(). I'm still thinking this may
> be true for an *unmodified* guest (i.e., HVM), but having read more
> about Xen's paravirtualized memory over the weekend, I'm thinking it
> would not hold true for a paravirtualized (PV) guest, which doesn't have
> the same concept of "guest physical" addresses.

I am not an x86 expert, so I will let the x86 folks answer that.

To give the Arm64 view: the Image header lets the kernel specify whether it needs to be loaded close to the start of DRAM, but even then the load address is not fixed.

In practice, the toolstack will always load the Image at the RAM base plus the text offset specified in the kernel's header. But that is just a convenience and may change in the future.

>
>>
>>> assumed that a PV guest's kernel would be loaded at a known "guest
>>> physical" address (like 0x100000 on i386), and that such addresses
>>> corresponded to the gfn's expected by xenforeignmemory_map(). But now I
>>> suspect this was an incorrect assumption, at least for the PV case. I've
>>> had trouble finding relevant documentation on the Xen site, but I did
>>> find a presentation earlier today suggesting that for PV's, gfn == mfn,
>>> which IIUC, would effectively preclude the use of fixed addresses in a
>>> PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
>>> a "known" address (e.g., 0x100000 on i386).
>>>
>>> Perhaps my use case (reading a guest kernel's code/data from dom0) makes
>>> sense for an HVM, but not a PV? Is it not possible for dom0 to use the
>>> foreignmemory interface to map PV guest pages read-only, without knowing
>>> in advance what, if anything, those pages represent in the guest? Or is
>>> the problem that the very concept of "guest physical" doesn't exist in a
>>> PV? I guess it would help if I had a better understanding of what sort
>>> of frame numbers are expected by xenforeignmemory_map() when the target
>>> VM is a PV. Is the Xen code the only documentation for this sort of
>>> thing, or is there some place I could get a high-level overview?
>>
>>
>> I am a bit confused with the rest of this e-mail. There are no concept of
>> HVM or PV on Arm. This is x86 specific. For Arm, there is a single type of
>> guest that borrow the goods of both HVM and PV.
>>
>> For instance, like HVM, the hardware is used to provide a separate address
>> space for each virtual machine. Arm calls that stage-2 translation. So gfn
>> != mfn.
>
> I was not aware that the HVM/PV concept didn't apply directly to ARM.
> Is there a document that summarizes the way Xen's address translation
> works on ARM? The document I've been looking at is...
>
> https://wiki.xen.org/wiki/X86_Paravirtualised_Memory_Management

The closest thing I can point you to for Xen's address translation scheme is my talk at Xen Developer Summit:

https://www.slideshare.net/xen_com_mgr/keeping-coherency-on-arm-julien-grall-arm

The address translation is very simple and just follows the scheme defined in the Arm ARM.

>
> ...but I haven't found anything analogous for ARM. At any rate, if the
> ARM hardware is providing a separate address space for each VM, then I
> suppose the concept of "guest physical" addresses is still valid. Does a
> guest physical address correspond to the output of stage-1 translation?

Yes.

> And are guest kernels on ARM generally loaded at fixed addresses (like
> 0x100000 in the x86 case), or is the kernel load address determined
> dynamically?

See my answer above.

Cheers,

--
Julien Grall


Re: Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
Julien,

On Mon, Oct 30, 2017 at 1:37 PM, Julien Grall <[hidden email]> wrote:

> On 30/10/17 16:26, Brett Stahlman wrote:
>>
>> Hello Julien,
>
>
> Hello Brett,
>
>> On Sun, Oct 29, 2017 at 3:37 PM, Julien Grall <[hidden email]>
>> wrote:
>>>
>>> Hello Brett,
>>>
>>> On 27/10/2017 22:58, Brett Stahlman wrote:
>>>>
>>>>
>>>> On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
>>>> <[hidden email]> wrote:
>>>>>
>>>>>
>>>>> CC'ing the tools Maintainers and Paul
>>>>>
>>>>> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné
>>>>>> <[hidden email]>
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>> Adding the ARM maintainers.
>>>>>>>
>>>>>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>>>>>>>> memory ranges from a Xen domain. The code performing the reads is
>>>>>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though
>>>>>>>> I'm
>>>>>>>> currently testing in QEMU. I constructed a simple test program,
>>>>>>>> which
>>>>>>>> reads an arbitrary domid/address pair from the command line,
>>>>>>>> converts
>>>>>>>> the address (assumed to be physical) to a page frame number, and
>>>>>>>> uses
>>>>>>>> xenforeignmemory_map() to map the page into the test app's virtual
>>>>>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
>>>>>>>> pointer, my attempt to dereference it fails with the following
>>>>>>>> error:
>>>>>>>>
>>>>>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>>>>>>>> gpa=0x00000030555000
>>>>>>>>
>>>>>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>>>>>>>> at 0x0000007f965f7000
>>>>>>>> Bus error
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I'm not sure what a Bus error means on ARM, have you tried to look
>>>>>>> at traps.c:2508 to see if there's some comment explaining why this
>>>>>>> fault is triggered?
>>>>>>
>>>>>>
>>>>>>
>>>>>> I believe the fault is occurring because mmap() failed to map the
>>>>>> page.
>>>>>> Although xenforeignmemory_map() is indeed returning a non-NULL
>>>>>> pointer,
>>>>>> code comments indicate that this does not imply success: page-level
>>>>>> errors might still be returned in the provided "err" array. In my
>>>>>> case,
>>>>>> it appears that an EINVAL is produced by mmap(): specifically, I
>>>>>> believe
>>>>>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c),
>>>>>> but
>>>>>> there are a number of conditions that can produce this error code, and
>>>>>> I
>>>>>> haven't yet determined which is to blame...
>>>>>>
>>>>>> So although I'm not sure why I would get an "address size" fault, it
>>>>>> makes sense that the pointer dereference would generate some sort of
>>>>>> paging-related fault, given that the page mapping was unsuccessful.
>>>>>> Hopefully, ARM developers will be able to explain why it was
>>>>>> unsuccessful, or at least give me an idea of what sorts of things
>>>>>> could
>>>>>> cause a mapping attempt to fail... At this point, I'm not particular
>>>>>> about what address I map. I just want to be able to read known data at
>>>>>> a
>>>>>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
>>>>>> myself that the page is actually mapped.
>>>>>
>>>>>
>>>>>
>>>>> The fault means "Data Abort from a lower Exception level". It could be
>>>>> an MMU fault or an alignment fault, according to the ARM ARM.
>>>>>
>>>>> I guess that the address range is not good. What DomU addresses are you
>>>>> trying to map?
>>>>
>>>>
>>>>
>>>> The intent was to map fixed "guest physical" addresses corresponding to
>>>> (e.g) the "zero page" of a guest's running kernel. Up until today, I'd
>>>
>>>
>>>
>>> What do you mean by "zero page"? Is it the guest physical address 0? If
>>> so,
>>> the current guest memory layout does not have anything mapped at the
>>> address.
>>
>>
>> No. I didn't mean guest physical address 0, but rather the start of the
>> linux kernel itself: specifically, the code in head.S. IIUC, the kernel
>> bootstrap code responsible for decompressing the kernel typically loads
>> this code at a fixed address, which on x86 architectures, happens to be
>> 0x100000. Thus, my assumption has been that if an unmodified Linux OS
>> were run in an x86 Xen guest, Xen would need to map guest physical
>> address 0x100000 to the machine physical address where the guest Linux
>> kernel is actually loaded. I'd also been assuming that if code running
>> in dom0 wished to use the foreignmemory interface to read the first page
>> of such a guest's kernel, it would need to provide the "guest physical"
>> address 0x100000 to xenforeignmemory_map(). I'm still thinking this may
>> be true for an *unmodified* guest (i.e., HVM), but having read more
>> about Xen's paravirtualized memory over the weekend, I'm thinking it
>> would not hold true for a paravirtualized (PV) guest, which doesn't have
>> the same concept of "guest physical" addresses.
>
>
> I am not x86 expert and will let x86 folks answered to that.
>
> To give the Arm64 view, the image headers allow to specify whether the
> kernel needs to be close to the beginning of the DRAM. But it is still not a
> fixed address.
>
> In practice, the toolstack will always load the Image at the ram base + text
> offset (specified in the kernel). But that's just for convenience and may
> change in the future.

Ok. Perhaps it would help if I examined this code...

>
>>
>>>
>>>> assumed that a PV guest's kernel would be loaded at a known "guest
>>>> physical" address (like 0x100000 on i386), and that such addresses
>>>> corresponded to the gfn's expected by xenforeignmemory_map(). But now I
>>>> suspect this was an incorrect assumption, at least for the PV case. I've
>>>> had trouble finding relevant documentation on the Xen site, but I did
>>>> find a presentation earlier today suggesting that for PV's, gfn == mfn,
>>>> which IIUC, would effectively preclude the use of fixed addresses in a
>>>> PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
>>>> a "known" address (e.g., 0x100000 on i386).
>>>>
>>>> Perhaps my use case (reading a guest kernel's code/data from dom0) makes
>>>> sense for an HVM, but not a PV? Is it not possible for dom0 to use the
>>>> foreignmemory interface to map PV guest pages read-only, without knowing
>>>> in advance what, if anything, those pages represent in the guest? Or is
>>>> the problem that the very concept of "guest physical" doesn't exist in a
>>>> PV? I guess it would help if I had a better understanding of what sort
>>>> of frame numbers are expected by xenforeignmemory_map() when the target
>>>> VM is a PV. Is the Xen code the only documentation for this sort of
>>>> thing, or is there some place I could get a high-level overview?
>>>
>>>
>>>
>>> I am a bit confused with the rest of this e-mail. There are no concept of
>>> HVM or PV on Arm. This is x86 specific. For Arm, there is a single type
>>> of
>>> guest that borrow the goods of both HVM and PV.
>>>
>>> For instance, like HVM, the hardware is used to provide a separate
>>> address
>>> space for each virtual machine. Arm calls that stage-2 translation. So
>>> gfn
>>> != mfn.
>>
>>
>> I was not aware that the HVM/PV concept didn't apply directly to ARM.
>> Is there a document that summarizes the way Xen's address translation
>> works on ARM? The document I've been looking at is...
>>
>> https://wiki.xen.org/wiki/X86_Paravirtualised_Memory_Management
>
>
> The closest I would find explaining Xen's address translation scheme would
> be my talk at Xen Developer Summit:
>
> https://www.slideshare.net/xen_com_mgr/keeping-coherency-on-arm-julien-grall-arm

After watching your talk and reading through the "Xen ARM with
Virtualization Extensions" whitepaper, I think I have a slightly better
understanding of how Xen ARM handles address translations. So which sort
of address does xenforeignmemory_map() expect?

1. stage 1 input (VA)
2. stage 1 output / stage 2 input (IPA)
3. stage 2 output (PA)

(I'm assuming #1 or #2...)
Thanks,
Brett S.

>
> The address translation is very simple and only follow the scheme introduced
> by the Arm Arm.
>
>>
>> ...but I haven't found anything analogous for ARM. At any rate, if the
>> ARM hardware is providing a separate address space for each VM, then I
>> suppose the concept of "guest physical" addresses is still valid. Does a
>> guest physical address correspond to the output of stage-1 translation?
>
>
> Yes.
>
>> And are guest kernels on ARM generally loaded at fixed addresses (like
>> 0x100000 in the x86 case), or is the kernel load address determined
>> dynamically?
>
>
> See my answer above.
>
> Cheers,
>
> --
> Julien Grall


Re: Error accessing memory mapped by xenforeignmemory_map()

Stefano Stabellini-4
On Mon, 30 Oct 2017, Brett Stahlman wrote:

> Julien,
>
> On Mon, Oct 30, 2017 at 1:37 PM, Julien Grall <[hidden email]> wrote:
> > On 30/10/17 16:26, Brett Stahlman wrote:
> >>
> >> Hello Julien,
> >
> >
> > Hello Brett,
> >
> >> On Sun, Oct 29, 2017 at 3:37 PM, Julien Grall <[hidden email]>
> >> wrote:
> >>>
> >>> Hello Brett,
> >>>
> >>> On 27/10/2017 22:58, Brett Stahlman wrote:
> >>>>
> >>>>
> >>>> On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
> >>>> <[hidden email]> wrote:
> >>>>>
> >>>>>
> >>>>> CC'ing the tools Maintainers and Paul
> >>>>>
> >>>>> On Fri, 27 Oct 2017, Brett Stahlman wrote:
> >>>>>>
> >>>>>>
> >>>>>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné
> >>>>>> <[hidden email]>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> Adding the ARM maintainers.
> >>>>>>>
> >>>>>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
> >>>>>>>> memory ranges from a Xen domain. The code performing the reads is
> >>>>>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though
> >>>>>>>> I'm
> >>>>>>>> currently testing in QEMU. I constructed a simple test program,
> >>>>>>>> which
> >>>>>>>> reads an arbitrary domid/address pair from the command line,
> >>>>>>>> converts
> >>>>>>>> the address (assumed to be physical) to a page frame number, and
> >>>>>>>> uses
> >>>>>>>> xenforeignmemory_map() to map the page into the test app's virtual
> >>>>>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
> >>>>>>>> pointer, my attempt to dereference it fails with the following
> >>>>>>>> error:
> >>>>>>>>
> >>>>>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
> >>>>>>>> gpa=0x00000030555000
> >>>>>>>>
> >>>>>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
> >>>>>>>> at 0x0000007f965f7000
> >>>>>>>> Bus error
> >>>>>>>
> >>>>>>>
> >>>>>>>
> >>>>>>> I'm not sure what a Bus error means on ARM, have you tried to look
> >>>>>>> at traps.c:2508 to see if there's some comment explaining why this
> >>>>>>> fault is triggered?
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> I believe the fault is occurring because mmap() failed to map the
> >>>>>> page.
> >>>>>> Although xenforeignmemory_map() is indeed returning a non-NULL
> >>>>>> pointer,
> >>>>>> code comments indicate that this does not imply success: page-level
> >>>>>> errors might still be returned in the provided "err" array. In my
> >>>>>> case,
> >>>>>> it appears that an EINVAL is produced by mmap(): specifically, I
> >>>>>> believe
> >>>>>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c),
> >>>>>> but
> >>>>>> there are a number of conditions that can produce this error code, and
> >>>>>> I
> >>>>>> haven't yet determined which is to blame...
> >>>>>>
> >>>>>> So although I'm not sure why I would get an "address size" fault, it
> >>>>>> makes sense that the pointer dereference would generate some sort of
> >>>>>> paging-related fault, given that the page mapping was unsuccessful.
> >>>>>> Hopefully, ARM developers will be able to explain why it was
> >>>>>> unsuccessful, or at least give me an idea of what sorts of things
> >>>>>> could
> >>>>>> cause a mapping attempt to fail... At this point, I'm not particular
> >>>>>> about what address I map. I just want to be able to read known data at
> >>>>>> a
> >>>>>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
> >>>>>> myself that the page is actually mapped.
> >>>>>
> >>>>>
> >>>>>
> >>>>> The fault means "Data Abort from a lower Exception level". It could be
> >>>>> an MMU fault or an alignment fault, according to the ARM ARM.
> >>>>>
> >>>>> I guess that the address range is not good. What DomU addresses are you
> >>>>> trying to map?
> >>>>
> >>>>
> >>>>
> >>>> The intent was to map fixed "guest physical" addresses corresponding to
> >>>> (e.g) the "zero page" of a guest's running kernel. Up until today, I'd
> >>>
> >>>
> >>>
> >>> What do you mean by "zero page"? Is it the guest physical address 0? If
> >>> so,
> >>> the current guest memory layout does not have anything mapped at the
> >>> address.
> >>
> >>
> >> No. I didn't mean guest physical address 0, but rather the start of the
> >> linux kernel itself: specifically, the code in head.S. IIUC, the kernel
> >> bootstrap code responsible for decompressing the kernel typically loads
> >> this code at a fixed address, which on x86 architectures, happens to be
> >> 0x100000. Thus, my assumption has been that if an unmodified Linux OS
> >> were run in an x86 Xen guest, Xen would need to map guest physical
> >> address 0x100000 to the machine physical address where the guest Linux
> >> kernel is actually loaded. I'd also been assuming that if code running
> >> in dom0 wished to use the foreignmemory interface to read the first page
> >> of such a guest's kernel, it would need to provide the "guest physical"
> >> address 0x100000 to xenforeignmemory_map(). I'm still thinking this may
> >> be true for an *unmodified* guest (i.e., HVM), but having read more
> >> about Xen's paravirtualized memory over the weekend, I'm thinking it
> >> would not hold true for a paravirtualized (PV) guest, which doesn't have
> >> the same concept of "guest physical" addresses.
> >
> >
> > I am not x86 expert and will let x86 folks answered to that.
> >
> > To give the Arm64 view, the image headers allow to specify whether the
> > kernel needs to be close to the beginning of the DRAM. But it is still not a
> > fixed address.
> >
> > In practice, the toolstack will always load the Image at the ram base + text
> > offset (specified in the kernel). But that's just for convenience and may
> > change in the future.
>
> Ok. Perhaps it would help if I examined this code...
>
> >
> >>
> >>>
> >>>> assumed that a PV guest's kernel would be loaded at a known "guest
> >>>> physical" address (like 0x100000 on i386), and that such addresses
> >>>> corresponded to the gfn's expected by xenforeignmemory_map(). But now I
> >>>> suspect this was an incorrect assumption, at least for the PV case. I've
> >>>> had trouble finding relevant documentation on the Xen site, but I did
> >>>> find a presentation earlier today suggesting that for PV's, gfn == mfn,
> >>>> which IIUC, would effectively preclude the use of fixed addresses in a
> >>>> PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
> >>>> a "known" address (e.g., 0x100000 on i386).
> >>>>
> >>>> Perhaps my use case (reading a guest kernel's code/data from dom0) makes
> >>>> sense for an HVM, but not a PV? Is it not possible for dom0 to use the
> >>>> foreignmemory interface to map PV guest pages read-only, without knowing
> >>>> in advance what, if anything, those pages represent in the guest? Or is
> >>>> the problem that the very concept of "guest physical" doesn't exist in a
> >>>> PV? I guess it would help if I had a better understanding of what sort
> >>>> of frame numbers are expected by xenforeignmemory_map() when the target
> >>>> VM is a PV. Is the Xen code the only documentation for this sort of
> >>>> thing, or is there some place I could get a high-level overview?
> >>>
> >>>
> >>>
> >>> I am a bit confused with the rest of this e-mail. There are no concept of
> >>> HVM or PV on Arm. This is x86 specific. For Arm, there is a single type
> >>> of
> >>> guest that borrow the goods of both HVM and PV.
> >>>
> >>> For instance, like HVM, the hardware is used to provide a separate
> >>> address
> >>> space for each virtual machine. Arm calls that stage-2 translation. So
> >>> gfn
> >>> != mfn.
> >>
> >>
> >> I was not aware that the HVM/PV concept didn't apply directly to ARM.
> >> Is there a document that summarizes the way Xen's address translation
> >> works on ARM? The document I've been looking at is...
> >>
> >> https://wiki.xen.org/wiki/X86_Paravirtualised_Memory_Management
> >
> >
> > The closest I would find explaining Xen's address translation scheme would
> > be my talk at Xen Developer Summit:
> >
> > https://www.slideshare.net/xen_com_mgr/keeping-coherency-on-arm-julien-grall-arm
>
> After watching your talk and reading through the "Xen ARM with
> Virtualization Extensions" whitepaper, I think I have a slightly better
> understanding of how Xen ARM handles address translations. So which sort
> of address does xenforeignmemory_map() expect?
>
> 1. stage 1 input (VA)
> 2. stage 1 output / stage 2 input (IPA)
> 3. stage 2 output (PA)
>
> (I'm assuming #1 or #2...)
#2, guest physical addresses (also called pseudo-physical addresses; the
Arm manuals call them intermediate physical addresses, IPAs)

Re: Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
On Mon, Oct 30, 2017 at 6:03 PM, Stefano Stabellini
<[hidden email]> wrote:

> On Mon, 30 Oct 2017, Brett Stahlman wrote:
>> Julien,
>>
>> On Mon, Oct 30, 2017 at 1:37 PM, Julien Grall <[hidden email]> wrote:
>> > On 30/10/17 16:26, Brett Stahlman wrote:
>> >>
>> >> Hello Julien,
>> >
>> >
>> > Hello Brett,
>> >
>> >> On Sun, Oct 29, 2017 at 3:37 PM, Julien Grall <[hidden email]>
>> >> wrote:
>> >>>
>> >>> Hello Brett,
>> >>>
>> >>> On 27/10/2017 22:58, Brett Stahlman wrote:
>> >>>>
>> >>>>
>> >>>> On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
>> >>>> <[hidden email]> wrote:
>> >>>>>
>> >>>>>
>> >>>>> CC'ing the tools Maintainers and Paul
>> >>>>>
>> >>>>> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>> >>>>>>
>> >>>>>>
>> >>>>>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné
>> >>>>>> <[hidden email]>
>> >>>>>> wrote:
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> Adding the ARM maintainers.
>> >>>>>>>
>> >>>>>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>> >>>>>>>> memory ranges from a Xen domain. The code performing the reads is
>> >>>>>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though
>> >>>>>>>> I'm
>> >>>>>>>> currently testing in QEMU. I constructed a simple test program,
>> >>>>>>>> which
>> >>>>>>>> reads an arbitrary domid/address pair from the command line,
>> >>>>>>>> converts
>> >>>>>>>> the address (assumed to be physical) to a page frame number, and
>> >>>>>>>> uses
>> >>>>>>>> xenforeignmemory_map() to map the page into the test app's virtual
>> >>>>>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
>> >>>>>>>> pointer, my attempt to dereference it fails with the following
>> >>>>>>>> error:
>> >>>>>>>>
>> >>>>>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>> >>>>>>>> gpa=0x00000030555000
>> >>>>>>>>
>> >>>>>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>> >>>>>>>> at 0x0000007f965f7000
>> >>>>>>>> Bus error
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> I'm not sure what a Bus error means on ARM, have you tried to look
>> >>>>>>> at traps.c:2508 to see if there's some comment explaining why this
>> >>>>>>> fault is triggered?
>> >>>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> I believe the fault is occurring because mmap() failed to map the
>> >>>>>> page.
>> >>>>>> Although xenforeignmemory_map() is indeed returning a non-NULL
>> >>>>>> pointer,
>> >>>>>> code comments indicate that this does not imply success: page-level
>> >>>>>> errors might still be returned in the provided "err" array. In my
>> >>>>>> case,
>> >>>>>> it appears that an EINVAL is produced by mmap(): specifically, I
>> >>>>>> believe
>> >>>>>> it's coming from privcmd_ioctl_mmap_batch() (drivers/xen/privcmd.c),
>> >>>>>> but
>> >>>>>> there are a number of conditions that can produce this error code, and
>> >>>>>> I
>> >>>>>> haven't yet determined which is to blame...
>> >>>>>>
>> >>>>>> So although I'm not sure why I would get an "address size" fault, it
>> >>>>>> makes sense that the pointer dereference would generate some sort of
>> >>>>>> paging-related fault, given that the page mapping was unsuccessful.
>> >>>>>> Hopefully, ARM developers will be able to explain why it was
>> >>>>>> unsuccessful, or at least give me an idea of what sorts of things
>> >>>>>> could
>> >>>>>> cause a mapping attempt to fail... At this point, I'm not particular
>> >>>>>> about what address I map. I just want to be able to read known data at
>> >>>>>> a
>> >>>>>> fixed (non-paged) address (e.g., kernel code/data), so I can prove to
>> >>>>>> myself that the page is actually mapped.
>> >>>>>
>> >>>>>
>> >>>>>
>> >>>>> The fault means "Data Abort from a lower Exception level". It could be
>> >>>>> an MMU fault or an alignment fault, according to the ARM ARM.
>> >>>>>
>> >>>>> I guess that the address range is not good. What DomU addresses are you
>> >>>>> trying to map?
>> >>>>
>> >>>>
>> >>>>
>> >>>> The intent was to map fixed "guest physical" addresses corresponding to
>> >>>> (e.g) the "zero page" of a guest's running kernel. Up until today, I'd
>> >>>
>> >>>
>> >>>
>> >>> What do you mean by "zero page"? Is it the guest physical address 0? If
>> >>> so,
>> >>> the current guest memory layout does not have anything mapped at the
>> >>> address.
>> >>
>> >>
>> >> No. I didn't mean guest physical address 0, but rather the start of the
>> >> linux kernel itself: specifically, the code in head.S. IIUC, the kernel
>> >> bootstrap code responsible for decompressing the kernel typically loads
>> >> this code at a fixed address, which on x86 architectures, happens to be
>> >> 0x100000. Thus, my assumption has been that if an unmodified Linux OS
>> >> were run in an x86 Xen guest, Xen would need to map guest physical
>> >> address 0x100000 to the machine physical address where the guest Linux
>> >> kernel is actually loaded. I'd also been assuming that if code running
>> >> in dom0 wished to use the foreignmemory interface to read the first page
>> >> of such a guest's kernel, it would need to provide the "guest physical"
>> >> address 0x100000 to xenforeignmemory_map(). I'm still thinking this may
>> >> be true for an *unmodified* guest (i.e., HVM), but having read more
>> >> about Xen's paravirtualized memory over the weekend, I'm thinking it
>> >> would not hold true for a paravirtualized (PV) guest, which doesn't have
>> >> the same concept of "guest physical" addresses.
>> >
>> >
>> > I am not an x86 expert and will let the x86 folks answer that.
>> >
>> > To give the Arm64 view, the image headers allow specifying whether the
>> > kernel needs to be close to the beginning of DRAM. But it is still not a
>> > fixed address.
>> >
>> > In practice, the toolstack will always load the Image at the ram base + text
>> > offset (specified in the kernel). But that's just for convenience and may
>> > change in the future.
>>
>> Ok. Perhaps it would help if I examined this code...
>>
>> >
>> >>
>> >>>
>> >>>> assumed that a PV guest's kernel would be loaded at a known "guest
>> >>>> physical" address (like 0x100000 on i386), and that such addresses
>> >>>> corresponded to the gfn's expected by xenforeignmemory_map(). But now I
>> >>>> suspect this was an incorrect assumption, at least for the PV case. I've
>> >>>> had trouble finding relevant documentation on the Xen site, but I did
>> >>>> find a presentation earlier today suggesting that for PV's, gfn == mfn,
>> >>>> which IIUC, would effectively preclude the use of fixed addresses in a
>> >>>> PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
>> >>>> a "known" address (e.g., 0x100000 on i386).
>> >>>>
>> >>>> Perhaps my use case (reading a guest kernel's code/data from dom0) makes
>> >>>> sense for an HVM, but not a PV? Is it not possible for dom0 to use the
>> >>>> foreignmemory interface to map PV guest pages read-only, without knowing
>> >>>> in advance what, if anything, those pages represent in the guest? Or is
>> >>>> the problem that the very concept of "guest physical" doesn't exist in a
>> >>>> PV? I guess it would help if I had a better understanding of what sort
>> >>>> of frame numbers are expected by xenforeignmemory_map() when the target
>> >>>> VM is a PV. Is the Xen code the only documentation for this sort of
>> >>>> thing, or is there some place I could get a high-level overview?
>> >>>
>> >>>
>> >>>
>> >>> I am a bit confused by the rest of this e-mail. There is no concept of
>> >>> HVM or PV on Arm. This is x86 specific. On Arm, there is a single type
>> >>> of guest that borrows the best of both HVM and PV.
>> >>>
>> >>> For instance, like HVM, the hardware is used to provide a separate
>> >>> address space for each virtual machine. Arm calls that stage-2
>> >>> translation. So gfn != mfn.
>> >>
>> >>
>> >> I was not aware that the HVM/PV concept didn't apply directly to ARM.
>> >> Is there a document that summarizes the way Xen's address translation
>> >> works on ARM? The document I've been looking at is...
>> >>
>> >> https://wiki.xen.org/wiki/X86_Paravirtualised_Memory_Management
>> >
>> >
>> > The closest thing I can find explaining Xen's address translation scheme
>> > would be my talk at the Xen Developer Summit:
>> >
>> > https://www.slideshare.net/xen_com_mgr/keeping-coherency-on-arm-julien-grall-arm
>>
>> After watching your talk and reading through the "Xen ARM with
>> Virtualization Extensions" whitepaper, I think I have a slightly better
>> understanding of how Xen ARM handles address translations. So which sort
>> of address does xenforeignmemory_map() expect?
>>
>> 1. stage 1 input (VA)
>> 2. stage 1 output / stage 2 input (IPA)
>> 3. stage 2 output (PA)
>>
>> (I'm assuming #1 or #2...)
>
> #2, guest physical addresses (also called pseudo-physical addresses in
> the Arm manuals)

Ok. So if I wanted to map the first page of a guest's kernel, I could add its
"text offset" to the RAM base (from Julien's formula, "ram base + text offset")
to obtain the pfn to pass to xenforeignmemory_map()?
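In case it helps, the arithmetic I have in mind is just a page shift. This is
only a sketch: the 0x40000000 RAM base and 0x8000 text offset below are assumed
example values for an arm64 guest, and `gpa_to_pfn`/`guest_kernel_pfn` are
hypothetical helper names, not part of any Xen API:

```c
#include <stdint.h>

#define PAGE_SHIFT 12  /* assuming 4 KiB pages */

/* Convert a guest-physical address to the frame number expected by
 * xenforeignmemory_map(). */
uint64_t gpa_to_pfn(uint64_t gpa)
{
    return gpa >> PAGE_SHIFT;
}

/* Hypothetical helper: guest kernel load address computed as
 * "ram base + text offset", using assumed example values. */
uint64_t guest_kernel_pfn(void)
{
    uint64_t ram_base    = 0x40000000ULL;  /* assumed RAM base */
    uint64_t text_offset = 0x8000ULL;      /* assumed text offset */
    return gpa_to_pfn(ram_base + text_offset);  /* 0x40008 */
}
```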

Thanks,
Brett S.

_______________________________________________
Xen-users mailing list
[hidden email]
https://lists.xen.org/xen-users

Re: Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
In reply to this post by Julien Grall-3
Julien,

On Mon, Oct 30, 2017 at 1:37 PM, Julien Grall <[hidden email]> wrote:

> On 30/10/17 16:26, Brett Stahlman wrote:
>>
>> Hello Julien,
>
>
> Hello Brett,
>
>> On Sun, Oct 29, 2017 at 3:37 PM, Julien Grall <[hidden email]>
>> wrote:
>>>
>>> Hello Brett,
>>>
>>> On 27/10/2017 22:58, Brett Stahlman wrote:
>>>>
>>>>
>>>> On Fri, Oct 27, 2017 at 3:22 PM, Stefano Stabellini
>>>> <[hidden email]> wrote:
>>>>>
>>>>>
>>>>> CC'ing the tools Maintainers and Paul
>>>>>
>>>>> On Fri, 27 Oct 2017, Brett Stahlman wrote:
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 27, 2017 at 9:31 AM, Roger Pau Monné
>>>>>> <[hidden email]>
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>> Adding the ARM maintainers.
>>>>>>>
>>>>>>> On Wed, Oct 25, 2017 at 11:54:59AM -0500, Brett Stahlman wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>> I'm trying to use the "xenforeignmemory" library to read arbitrary
>>>>>>>> memory ranges from a Xen domain. The code performing the reads is
>>>>>>>> designed to run in dom0 on a Zynq ultrascale MPSoC (ARM64), though
>>>>>>>> I'm
>>>>>>>> currently testing in QEMU. I constructed a simple test program,
>>>>>>>> which
>>>>>>>> reads an arbitrary domid/address pair from the command line,
>>>>>>>> converts
>>>>>>>> the address (assumed to be physical) to a page frame number, and
>>>>>>>> uses
>>>>>>>> xenforeignmemory_map() to map the page into the test app's virtual
>>>>>>>> memory space. Although xenforeignmemory_map() returns a non-NULL
>>>>>>>> pointer, my attempt to dereference it fails with the following
>>>>>>>> error:
>>>>>>>>
>>>>>>>> (XEN) traps.c:2508:d0v1 HSR=0x93810007 pc=0x400a20 gva=0x7f965f7000
>>>>>>>> gpa=0x00000030555000
>>>>>>>>
>>>>>>>> [   74.361735] Unhandled fault: ttbr address size fault (0x92000000)
>>>>>>>> at 0x0000007f965f7000
>>>>>>>> Bus error
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I'm not sure what a Bus error means on ARM, have you tried to look
>>>>>>> at traps.c:2508 to see if there's some comment explaining why this
>>>>>>> fault is triggered?
>>>>>>
>>>>>>
>>>>>>
>> >>>>>> I believe the fault is occurring because mmap() failed to map the
>> >>>>>> page. Although xenforeignmemory_map() is indeed returning a non-NULL
>> >>>>>> pointer, code comments indicate that this does not imply success:
>> >>>>>> page-level errors might still be returned in the provided "err"
>> >>>>>> array. In my case, it appears that an EINVAL is produced by mmap():
>> >>>>>> specifically, I believe it's coming from privcmd_ioctl_mmap_batch()
>> >>>>>> (drivers/xen/privcmd.c), but there are a number of conditions that
>> >>>>>> can produce this error code, and I haven't yet determined which is
>> >>>>>> to blame...
>> >>>>>>
>> >>>>>> So although I'm not sure why I would get an "address size" fault, it
>> >>>>>> makes sense that the pointer dereference would generate some sort of
>> >>>>>> paging-related fault, given that the page mapping was unsuccessful.
>> >>>>>> Hopefully, ARM developers will be able to explain why it was
>> >>>>>> unsuccessful, or at least give me an idea of what sorts of things
>> >>>>>> could cause a mapping attempt to fail... At this point, I'm not
>> >>>>>> particular about what address I map. I just want to be able to read
>> >>>>>> known data at a fixed (non-paged) address (e.g., kernel code/data),
>> >>>>>> so I can prove to myself that the page is actually mapped.
>>>>>
>>>>>
>>>>>
>>>>> The fault means "Data Abort from a lower Exception level". It could be
>>>>> an MMU fault or an alignment fault, according to the ARM ARM.
>>>>>
>>>>> I guess that the address range is not good. What DomU addresses are you
>>>>> trying to map?
>>>>
>>>>
>>>>
>>>> The intent was to map fixed "guest physical" addresses corresponding to
>> >>>> (e.g.) the "zero page" of a guest's running kernel. Up until today, I'd
>>>
>>>
>>>
>> >>> What do you mean by "zero page"? Is it the guest physical address 0? If
>> >>> so, the current guest memory layout does not have anything mapped at
>> >>> that address.
>>
>>
>> No. I didn't mean guest physical address 0, but rather the start of the
>> linux kernel itself: specifically, the code in head.S. IIUC, the kernel
>> bootstrap code responsible for decompressing the kernel typically loads
>> this code at a fixed address, which on x86 architectures, happens to be
>> 0x100000. Thus, my assumption has been that if an unmodified Linux OS
>> were run in an x86 Xen guest, Xen would need to map guest physical
>> address 0x100000 to the machine physical address where the guest Linux
>> kernel is actually loaded. I'd also been assuming that if code running
>> in dom0 wished to use the foreignmemory interface to read the first page
>> of such a guest's kernel, it would need to provide the "guest physical"
>> address 0x100000 to xenforeignmemory_map(). I'm still thinking this may
>> be true for an *unmodified* guest (i.e., HVM), but having read more
>> about Xen's paravirtualized memory over the weekend, I'm thinking it
>> would not hold true for a paravirtualized (PV) guest, which doesn't have
>> the same concept of "guest physical" addresses.
>
>
> I am not an x86 expert and will let the x86 folks answer that.
>
> To give the Arm64 view, the image headers allow specifying whether the
> kernel needs to be close to the beginning of DRAM. But it is still not a
> fixed address.
>
> In practice, the toolstack will always load the Image at the ram base + text
> offset (specified in the kernel). But that's just for convenience and may
> change in the future.

Ok. So I'm thinking the relevant address would be 0xffffffc000080000,
which is where System.map places the kernel's "_text" segment. IIUC,
aarch64 Linux uses 39-bit addresses in a range controlled by TTBR1,
beginning at 0xffffff8000000000. I'm assuming the kernel's address
space uses a direct, 1-to-1 mapping (i.e., guest virtual == guest
physical). Correct? Also, should a signed or unsigned right shift be
used to convert a kernel address to a pfn? I.e., would the pfn
corresponding to 0xffffffc000080000 be 0xfffffffffc000080 or
0x000ffffffc000080?
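For what it's worth, those two candidate values correspond exactly to the two
shift flavors. A quick check of the arithmetic (4 KiB pages assumed; note that
right-shifting a negative signed value is strictly implementation-defined in C,
though GCC and Clang implement it as an arithmetic shift):

```c
#include <stdint.h>

#define PAGE_SHIFT 12  /* assuming 4 KiB pages */

/* Logical (unsigned) shift: vacated high bits become zero. */
uint64_t shift_unsigned(uint64_t addr)
{
    return addr >> PAGE_SHIFT;
    /* 0xffffffc000080000 -> 0x000ffffffc000080 */
}

/* Arithmetic (signed) shift: the sign bit is replicated into the
 * vacated high bits (implementation-defined in C, but this is what
 * GCC/Clang do). */
uint64_t shift_signed(uint64_t addr)
{
    return (uint64_t)((int64_t)addr >> PAGE_SHIFT);
    /* 0xffffffc000080000 -> 0xfffffffffc000080 */
}
```

Since frame numbers are unsigned quantities, the logical shift is the one that
would make sense here.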

Thanks,
Brett S.

>
>>
>>>
>>>> assumed that a PV guest's kernel would be loaded at a known "guest
>>>> physical" address (like 0x100000 on i386), and that such addresses
>>>> corresponded to the gfn's expected by xenforeignmemory_map(). But now I
>>>> suspect this was an incorrect assumption, at least for the PV case. I've
>>>> had trouble finding relevant documentation on the Xen site, but I did
>>>> find a presentation earlier today suggesting that for PV's, gfn == mfn,
>>>> which IIUC, would effectively preclude the use of fixed addresses in a
>>>> PV guest. IOW, unlike an HVM's kernel, a PV's kernel cannot be loaded at
>>>> a "known" address (e.g., 0x100000 on i386).
>>>>
>>>> Perhaps my use case (reading a guest kernel's code/data from dom0) makes
>>>> sense for an HVM, but not a PV? Is it not possible for dom0 to use the
>>>> foreignmemory interface to map PV guest pages read-only, without knowing
>>>> in advance what, if anything, those pages represent in the guest? Or is
>>>> the problem that the very concept of "guest physical" doesn't exist in a
>>>> PV? I guess it would help if I had a better understanding of what sort
>>>> of frame numbers are expected by xenforeignmemory_map() when the target
>>>> VM is a PV. Is the Xen code the only documentation for this sort of
>>>> thing, or is there some place I could get a high-level overview?
>>>
>>>
>>>
>> >>> I am a bit confused by the rest of this e-mail. There is no concept of
>> >>> HVM or PV on Arm. This is x86 specific. On Arm, there is a single type
>> >>> of guest that borrows the best of both HVM and PV.
>>>
>> >>> For instance, like HVM, the hardware is used to provide a separate
>> >>> address space for each virtual machine. Arm calls that stage-2
>> >>> translation. So gfn != mfn.
>>
>>
>> I was not aware that the HVM/PV concept didn't apply directly to ARM.
>> Is there a document that summarizes the way Xen's address translation
>> works on ARM? The document I've been looking at is...
>>
>> https://wiki.xen.org/wiki/X86_Paravirtualised_Memory_Management
>
>
> The closest thing I can find explaining Xen's address translation scheme
> would be my talk at the Xen Developer Summit:
>
> https://www.slideshare.net/xen_com_mgr/keeping-coherency-on-arm-julien-grall-arm
>
> The address translation is very simple and only follows the scheme defined
> by the ARM ARM.
>
>>
>> ...but I haven't found anything analogous for ARM. At any rate, if the
>> ARM hardware is providing a separate address space for each VM, then I
>> suppose the concept of "guest physical" addresses is still valid. Does a
>> guest physical address correspond to the output of stage-1 translation?
>
>
> Yes.
>
>> And are guest kernels on ARM generally loaded at fixed addresses (like
>> 0x100000 in the x86 case), or is the kernel load address determined
>> dynamically?
>
>
> See my answer above.
>
> Cheers,
>
> --
> Julien Grall


Re: Error accessing memory mapped by xenforeignmemory_map()

Julien Grall-3
Hi,

On 31/10/2017 19:17, Brett Stahlman wrote:
> Ok. So I'm thinking the relevant address would be 0xffffffc000080000,
> which is where System.map places the kernel's "_text" segment. IIUC,
> aarch64 Linux uses 39-bit addresses in a range controlled by TTBR1,
> beginning at 0xffffff8000000000. I'm assuming the kernel's address
> space uses a direct, 1-to-1 mapping (i.e., guest virtual == guest
> physical). Correct? Also, should a signed or unsigned right shift be
> used to convert a kernel address to a pfn? I.e., would the pfn
> corresponding to 0xffffffc000080000 be 0xfffffffffc000080 or
> 0x000ffffffc000080?

The kernel does not direct-map memory (i.e. guest virtual
!= guest physical). 0xffffffc000080000 would be the virtual
address of the start of the kernel.

I don't think Linux moves the kernel within the physical
address space after boot. Assuming you don't use UEFI in
the guest and you are using Xen 4.6 or later, the kernel
will be loaded at guest physical address 0x40008000.

You can also find this information via xl create when it
is in verbose mode (xl -vvv create):

domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x40008+0x7b4 at 0xffff94696000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x40008000 -> 0x407bc000  (pfn 0x40008 + 0x7b4 pages)
domainbuilder: detail: xc_dom_load_zimage_kernel: called
domainbuilder: detail: xc_dom_load_zimage_kernel: kernel seg 0x40008000-0x407bc000
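The segment bounds in that log encode both values you would hand to a mapping
call: the first frame number and the page count. A sketch of the arithmetic,
assuming 4 KiB pages (the helper names are illustrative, not a Xen API):

```c
#include <stdint.h>

#define PAGE_SHIFT 12  /* assuming 4 KiB pages */

/* First frame number of a segment: e.g. 0x40008000 -> pfn 0x40008,
 * matching the "pfn 0x40008" in the domainbuilder log. */
uint64_t seg_first_pfn(uint64_t seg_start)
{
    return seg_start >> PAGE_SHIFT;
}

/* Number of pages covered by [seg_start, seg_end): for the kernel
 * segment 0x40008000-0x407bc000 this is 0x7b4, matching the log's
 * "+ 0x7b4 pages". */
uint64_t seg_num_pages(uint64_t seg_start, uint64_t seg_end)
{
    return (seg_end - seg_start) >> PAGE_SHIFT;
}
```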

Cheers,

--
Julien Grall


Re: Error accessing memory mapped by xenforeignmemory_map()

Brett Stahlman
Julien,

On Tue, Oct 31, 2017 at 3:44 PM, Julien Grall <[hidden email]> wrote:

> Hi,
>
> On 31/10/2017 19:17, Brett Stahlman wrote:
>> Ok. So I'm thinking the relevant address would be 0xffffffc000080000,
>> which is where System.map places the kernel's "_text" segment. IIUC,
>> aarch64 Linux uses 39-bit addresses in a range controlled by TTBR1,
>> beginning at 0xffffff8000000000. I'm assuming the kernel's address
>> space uses a direct, 1-to-1 mapping (i.e., guest virtual == guest
>> physical). Correct? Also, should a signed or unsigned right shift be
>> used to convert a kernel address to a pfn? I.e., would the pfn
>> corresponding to 0xffffffc000080000 be 0xfffffffffc000080 or
>> 0x000ffffffc000080?
>
> The kernel does not direct-map memory (i.e. guest virtual
> != guest physical). 0xffffffc000080000 would be the virtual
> address of the start of the kernel.
>
> I don't think Linux moves the kernel within the physical
> address space after boot. Assuming you don't use UEFI in
> the guest and you are using Xen 4.6 or later, the kernel
> will be loaded at guest physical address 0x40008000.
>
> You can also find this information via xl create when it
> is in verbose mode (xl -vvv create):
>
> domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x40008+0x7b4 at 0xffff94696000
> domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x40008000 -> 0x407bc000  (pfn 0x40008 + 0x7b4 pages)
> domainbuilder: detail: xc_dom_load_zimage_kernel: called
> domainbuilder: detail: xc_dom_load_zimage_kernel: kernel seg 0x40008000-0x407bc000

Bingo! Using this approach, I was finally able to map my guest kernel's
head page (at 0x40008000) and read the expected data. I appreciate your
patience...

Sincerely,
Brett S.

>
> Cheers,
>
> --
> Julien Grall
