Xen dom0 vCPUs pinning for PV/HVM domUs


Xen dom0 vCPUs pinning for PV/HVM domUs

John Naggets
Hello,

I am running Xen 4.9 on Ubuntu 17.10 with a mix of PV and HVM domUs.
On the dom0 side I have pinned 2 vCPUs and reserved 4 GB of memory, as
I was used to doing in a pure PV domU environment (no HVM).

Now I was wondering: is this still valid when running a mix of PV and
HVM domUs, or is this recommendation of pinning vCPUs and reserving
memory for the dom0 only valid when running PV domUs?

These are the Xen settings I use in my /etc/default/grub file:

GRUB_CMDLINE_XEN="dom0_mem=4G,max:4G dom0_max_vcpus=2 dom0_vcpus_pin"
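
In case it matters: after editing /etc/default/grub I run

  sudo update-grub

and then reboot the host, so that the new Xen command line takes
effect.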

Thanks for the hints.

Best regards,
John


Re: Xen dom0 vCPUs pinning for PV/HVM domUs

Hans van Kranenburg-2
On 02/12/2018 10:42 PM, John Naggets wrote:

> [...]
>
> These are the Xen settings I use in my /etc/default/grub file:
>
> GRUB_CMDLINE_XEN="dom0_mem=4G,max:4G dom0_max_vcpus=2 dom0_vcpus_pin"

The dom0_mem=4G,max:4G is always a good idea to prevent the golden age
of ballooning from happening in your dom0.

I have dom0_max_vcpus=4 myself (servers have 16 or 20 cpu cores) and no
dom0_vcpus_pin, but I use...

  xl sched-credit -d 0 -w 32767

...to give the dom0 a ridiculously high priority whenever it requests
CPU time, to make sure it gets scheduled first.
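
You can check afterwards that the weight took effect with:

  xl sched-credit -d 0

which prints the domain's name, weight and cap.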

https://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance#Dom0_vCPUs

Hans


Re: Xen dom0 vCPUs pinning for PV/HVM domUs

John Naggets
Thanks Hans for your interesting comments.

I forgot to mention that I have also added the following to my
/etc/rc.local, to be on the safe side:

/usr/sbin/xl sched-credit -d 0 -w 512

So if I understand correctly, this is a fine config for mixed HVM and
PV domUs, ensuring enough resources for the dom0?

I am using an Intel Xeon Silver 4110 CPU @ 2.10GHz, which has 8 cores
and 16 threads.
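
For what it's worth, the topology as seen by the hypervisor can be
confirmed with:

  xl info | grep -E 'nr_cpus|cores_per_socket|threads_per_core'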

On Mon, Feb 12, 2018 at 11:00 PM, Hans van Kranenburg <[hidden email]> wrote:

> [...]


Re: Xen dom0 vCPUs pinning for PV/HVM domUs

Dario Faggioli-4
On Tue, 2018-02-13 at 15:55 +0100, John Naggets wrote:

> [...]
>
> So if I understand correctly, this is a fine config for mixed HVM
> and PV domUs, ensuring enough resources for the dom0?
>
Pinning of vCPUs, either of dom0 or domUs, is completely orthogonal to
what kind of guest is used (for domUs, of course).

Whether or not pinning is helpful is highly dependent on the actual
workload, but in general, using dom0_vcpus_pin is discouraged (I find
the parameter itself awkward and confusing).

In fact, if what you want is to reserve some pCPUs for dom0 and dom0
only, using dom0_vcpus_pin _DOES_NOT_ do that.
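
You can see this for yourself by looking at the affinities with:

  xl vcpu-list

With dom0_vcpus_pin, dom0's vCPUs show up pinned 1:1 to pCPUs 0, 1,
..., but the domUs' vCPUs are still free to run on any pCPU,
including the ones dom0 is pinned to.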

Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/

Re: Xen dom0 vCPUs pinning for PV/HVM domUs

John Naggets
Thank you Dario for your insight. Indeed I now find the
dom0_vcpus_pin parameter really confusing. The wiki documentation
says:

"Another interesting approach is pinning Dom0 vCPUs to physical CPUs,
this can be done by adding dom0_vcpus_pin to the Xen command line.
Then once Dom0 has booted you can see to which CPUs the vCPUs have
been pinned and exclude other domains from running on those CPUs."

Source: https://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance#Dom0_vCPUs

I think this documentation should then be adapted to reflect reality
and not lead people into confusion.

On Wed, Feb 14, 2018 at 3:01 PM, Dario Faggioli <[hidden email]> wrote:

> [...]


Re: Xen dom0 vCPUs pinning for PV/HVM domUs

Dario Faggioli-4
On Wed, 2018-02-14 at 15:37 +0100, John Naggets wrote:

> Thank you Dario for your insight. Indeed I now find the
> dom0_vcpus_pin parameter really confusing. The wiki documentation
> says:
>
> "Another interesting approach is pinning Dom0 vCPUs to physical CPUs,
> this can be done by adding dom0_vcpus_pin to the Xen command line.
> Then once Dom0 has booted you can see to which CPUs the vCPUs have
> been pinned and exclude other domains from running on those CPUs."
>
> Source: https://wiki.xenproject.org/wiki/Tuning_Xen_for_Performance#Dom0_vCPUs
>
> I think this documentation should then be adapted to reflect reality
> and not lead people into confusion.
>
What is said there is correct.

And the following paragraph also explains how to actually achieve
dom0/domU pCPU isolation. What I don't like about dom0_vcpus_pin is:
- it's very inflexible. I.e., it does not allow you to say anything
  like "pin dom0's vCPUs to pCPUs x,y,z-w". Instead, it will always
  pin d0v0 to pCPU 0, d0v1 to pCPU 1, d0v2 to pCPU 2, etc.;
- IIRC, if you boot with dom0_vcpus_pin, you can't change the
  affinity of dom0's vCPUs afterwards (and I never understood why it
  was made to work that way...);
- people may see it and tend to think that, just by using it, they
  will achieve dom0 isolation, without bothering to read that page
  and find out that they also need to put something in _all_ their
  domUs' configs (see the sketch below);
- I do find it very impractical to have to put something in _all_ the
  domUs' configs, and although I appreciate that someone may be
  willing to do that, and am fine with it, I think we should not
  encourage doing it.
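
To be clear, a minimal sketch of what that means in practice (the
pCPU numbers are just an example for a 16-thread box):

  # Xen command line: dom0_max_vcpus=2 (and no dom0_vcpus_pin)
  # after boot, pin dom0's vCPUs to pCPUs 0-1:
  xl vcpu-pin Domain-0 all 0-1
  # then, in _each_ domU's config file, keep that guest off pCPUs 0-1:
  cpus = "2-15"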

So, although that page does reflect the reality, I agree that it could
be restructured a bit.

Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/

Re: Xen dom0 vCPUs pinning for PV/HVM domUs

Hans van Kranenburg-2
On 02/14/2018 03:49 PM, Dario Faggioli wrote:

> [...]
>
> So, although that page does reflect the reality, I agree that it could
> be restructured a bit.

I wonder what the actual thoughts behind recommending it are.

I've been using the alternative, the absurdly high scheduler priority
for dom0, for years (ever since the first time I ran into a situation
where I couldn't access a dom0 because guests were exploding).

I've never run into that problematic situation again since. It does
not need all the pinning config in the domUs, and it also doesn't
keep pCPUs idle when dom0 does not need the CPU time.
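
If you want the weight to survive reboots without relying on
/etc/rc.local, a oneshot systemd unit is one way to do it; a sketch
(the unit name is made up, and what to order it after may differ per
distribution):

  # /etc/systemd/system/dom0-sched-weight.service (hypothetical name)
  [Unit]
  Description=Give dom0 a very high credit scheduler weight
  After=xen-init-dom0.service

  [Service]
  Type=oneshot
  ExecStart=/usr/sbin/xl sched-credit -d 0 -w 32767

  [Install]
  WantedBy=multi-user.target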

Hans
