[PATCH] qemu/xendisk: set maximum number of grants to be used

[PATCH] qemu/xendisk: set maximum number of grants to be used

Jan Beulich
Legacy (non-pvops) gntdev drivers may require this to be done when the
number of grants intended to be used simultaneously exceeds a certain
driver specific default limit.

Signed-off-by: Jan Beulich <[hidden email]>

--- a/hw/xen_disk.c
+++ b/hw/xen_disk.c
@@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
     if (xen_mode != XEN_EMULATE) {
         batch_maps = 1;
     }
+    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
+            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
+        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
+                      strerror(errno));
 }
 
 static int blk_init(struct XenDevice *xendev)
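
For reference, the grant budget this call establishes works out as
below. A minimal standalone sketch; the concrete values max_requests
== 32 and BLKIF_MAX_SEGMENTS_PER_REQUEST == 11 are assumptions here,
matching the numbers quoted later in this thread.

    /* Each in-flight request can map up to one grant per segment, and
     * one extra slot is needed for the shared ring page. */
    #include <stdio.h>

    int main(void)
    {
        int max_requests = 32;  /* assumed; cf. "32 * 11" below */
        int max_segments = 11;  /* assumed BLKIF_MAX_SEGMENTS_PER_REQUEST */

        printf("max grants = %d\n", max_requests * max_segments + 1); /* 353 */
        return 0;
    }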




Re: [PATCH] qemu/xendisk: set maximum number of grants to be used

Jan Beulich
>>> On 11.05.12 at 09:19, "Jan Beulich" <[hidden email]> wrote:
> Legacy (non-pvops) gntdev drivers may require this to be done when the
> number of grants intended to be used simultaneously exceeds a certain
> driver specific default limit.
>
> Signed-off-by: Jan Beulich <[hidden email]>
>
> --- a/hw/xen_disk.c
> +++ b/hw/xen_disk.c
> @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
>      if (xen_mode != XEN_EMULATE) {
>          batch_maps = 1;
>      }
> +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)

In more extensive testing it appears that very rarely this value is still
too low:

xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)

342 + 11 = 353 > 352 = 32 * 11

Could someone help out here? I first thought this might be due to
use_aio being non-zero, but ioreq_start() doesn't permit more than
max_requests struct ioreq instances to be in flight.

Additionally, shouldn't the driver be smarter and gracefully handle
grant mapping failures (as the per-domain map track table in the
hypervisor is a finite resource)?

Jan

> +        xen_be_printf(xendev, 0, "xc_gnttab_set_max_grants failed: %s\n",
> +                      strerror(errno));
>  }
>  
>  static int blk_init(struct XenDevice *xendev)
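
As a sketch of what such graceful handling could look like, assuming
the Xen 4.x libxc grant-table API: map_or_defer() and the requeue
policy are invented for illustration and are not part of xen_disk.c;
only the xc_gnttab_map_grant_refs() call itself is the one the driver
already uses for batched maps.

    #include <errno.h>
    #include <stddef.h>
    #include <xenctrl.h>

    /* Hypothetical helper: treat ENOMEM from a batched grant map as
     * "defer and retry" rather than failing the request outright. */
    static void *map_or_defer(xc_gnttab *gnt, uint32_t count,
                              uint32_t *domids, uint32_t *refs, int prot,
                              int *defer)
    {
        void *addr = xc_gnttab_map_grant_refs(gnt, count, domids, refs, prot);

        /* The per-domain maptrack table is finite, so a map can fail
         * transiently; flag it so the caller can requeue the ioreq. */
        *defer = (addr == NULL && errno == ENOMEM);
        return addr;
    }

The caller would keep such an ioreq queued and retry once other
requests complete and release their slots.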




Re: [PATCH] qemu/xendisk: set maximum number of grants to be used

Stefano Stabellini
On Fri, 11 May 2012, Jan Beulich wrote:

> >>> On 11.05.12 at 09:19, "Jan Beulich" <[hidden email]> wrote:
> > Legacy (non-pvops) gntdev drivers may require this to be done when the
> > number of grants intended to be used simultaneously exceeds a certain
> > driver specific default limit.
> >
> > Signed-off-by: Jan Beulich <[hidden email]>
> >
> > --- a/hw/xen_disk.c
> > +++ b/hw/xen_disk.c
> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
> >      if (xen_mode != XEN_EMULATE) {
> >          batch_maps = 1;
> >      }
> > +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
> > +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
>
> In more extensive testing it appears that very rarely this value is still
> too low:
>
> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
>
> 342 + 11 = 353 > 352 = 32 * 11
>
> Could someone help out here? I first thought this might be due to
> use_aio being non-zero, but ioreq_start() doesn't permit more than
> max_requests struct ioreq instances to be in flight.

Actually, 342 + 11 = 353 should still be OK, because it is equal to
32 * 11 + 1, where the additional 1 is for the ring, right?


> Additionally, shouldn't the driver be smarter and gracefully handle
> grant mapping failures (as the per-domain map track table in the
> hypervisor is a finite resource)?

yes, probably
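
Spelling the arithmetic out with the numbers from this thread (a
trivial standalone check; nothing here goes beyond what is stated
above):

    #include <assert.h>

    int main(void)
    {
        int limit  = 32 * 11 + 1; /* max_requests * segments + 1 for the ring */
        int demand = 342 + 11;    /* maps already held + the failing map */

        assert(limit == 353 && demand == 353);
        assert(demand <= limit);  /* the budget itself is not exceeded */
        return 0;
    }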

Re: [PATCH] qemu/xendisk: set maximum number of grants to be used

Jan Beulich
>>> On 11.05.12 at 19:07, Stefano Stabellini <[hidden email]> wrote:

> On Fri, 11 May 2012, Jan Beulich wrote:
>> >>> On 11.05.12 at 09:19, "Jan Beulich" <[hidden email]> wrote:
>> > Legacy (non-pvops) gntdev drivers may require this to be done when the
>> > number of grants intended to be used simultaneously exceeds a certain
>> > driver specific default limit.
>> >
>> > Signed-off-by: Jan Beulich <[hidden email]>
>> >
>> > --- a/hw/xen_disk.c
>> > +++ b/hw/xen_disk.c
>> > @@ -536,6 +536,10 @@ static void blk_alloc(struct XenDevice *
>> >      if (xen_mode != XEN_EMULATE) {
>> >          batch_maps = 1;
>> >      }
>> > +    if (xc_gnttab_set_max_grants(xendev->gnttabdev,
>> > +            max_requests * BLKIF_MAX_SEGMENTS_PER_REQUEST + 1) < 0)
>>
>> In more extensive testing it appears that very rarely this value is still
>> too low:
>>
>> xen be: qdisk-768: can't map 11 grant refs (Cannot allocate memory, 342 maps)
>>
>> 342 + 11 = 353 > 352 = 32 * 11
>>
>> Could someone help out here? I first thought this might be due to
>> use_aio being non-zero, but ioreq_start() doesn't permit more than
>> max_requests struct ioreq instances to be in flight.
>
> Actually, 342 + 11 = 353 should still be OK, because it is equal to
> 32 * 11 + 1, where the additional 1 is for the ring, right?

The +1 is for the ring, yes. And the calculation in the driver actually
appears to be fine. It's rather a fragmentation issue, afaict: the
driver needs to allocate 11 contiguous slots, and those may not be
available. I'll send out a v2 of the patch soon, taking fragmentation
into account.

Jan
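
To make the fragmentation point concrete, here is a toy first-fit slot
allocator. It is purely illustrative, not the real gntdev allocator;
only the totals (353 slots, 11-segment requests) are taken from the
thread. It shows how completed smaller requests can leave plenty of
free slots overall while no contiguous run of 11 remains.

    #include <stdio.h>
    #include <string.h>

    #define NSLOTS 353
    #define SEGS   11

    static char used[NSLOTS];

    /* First-fit: find and claim `len` contiguous free slots. */
    static int alloc_run(int len)
    {
        for (int i = 0; i + len <= NSLOTS; i++) {
            int j = 0;

            while (j < len && !used[i + j])
                j++;
            if (j == len) {
                memset(used + i, 1, len);
                return i;
            }
            i += j; /* jump past the used slot that broke the run */
        }
        return -1;
    }

    static void free_run(int start, int len)
    {
        memset(used + start, 0, len);
    }

    int main(void)
    {
        int start[176];

        alloc_run(1);                     /* the shared ring page */
        for (int i = 0; i < 176; i++)     /* 176 two-segment requests */
            start[i] = alloc_run(2);
        for (int i = 0; i < 176; i += 2)  /* half of them complete */
            free_run(start[i], 2);

        /* 176 slots are free in total, but every gap is only 2 wide,
         * so an 11-segment map cannot be placed: */
        printf("11-slot map %s\n",
               alloc_run(SEGS) < 0 ? "fails" : "succeeds");
        return 0;
    }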

