
Xen common hardware configuration

Xen common hardware configuration

Marin Cosmin
Hi,

Could somebody point me to the most common hardware configurations for a server running Xen with a number of domains, with respect to memory, CPU, and CPU cache? I am interested in finding out whether it would be possible to do some optimizations with respect to the memory that is allocated to a particular domain.

Thanks,
Cosmin


_______________________________________________
Xen-users mailing list
[hidden email]
https://lists.xen.org/xen-users
Re: Xen common hardware configuration

Simon Hobson-2
Marin Cosmin <[hidden email]> wrote:

> Could somebody point me to the most common hardware configurations for a server running Xen with a number of domains, with respect to memory, CPU, and CPU cache? I am interested in finding out whether it would be possible to do some optimizations with respect to the memory that is allocated to a particular domain.

I doubt if there is such a list - there are so many variations of workload that I doubt there are many systems that share anything in common with others.

In terms of memory (and disk), that's the easiest to answer - you need enough for Dom0, plus enough for each of your DomUs, and a bit for overheads. So if you know how much each DomU needs, add them all up, add enough for the Dom0, and round up - that's your *minimum*. If you don't know what your workloads will be in the future, then you stick your finger in the air and guess ;-) OK, that's an exaggeration - you do the totting up for what you do know, and then you have to guesstimate how much it will increase over the period you need to provision for.
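As a trivial sketch of that totting-up (every figure below is invented purely for illustration, not a recommendation):

```python
# Rough minimum-RAM estimate for a Xen host: sum the per-DomU needs,
# add a Dom0 allocation and a margin for hypervisor overheads,
# then round up to the next whole GiB.

domus_mib = {"web": 2048, "db": 4096, "mail": 1024}  # hypothetical guests
dom0_mib = 2048       # what you give Dom0
overhead_mib = 512    # hypervisor, page tables, etc.

minimum = sum(domus_mib.values()) + dom0_mib + overhead_mib
minimum_gib = -(-minimum // 1024)   # ceiling division

print(f"{minimum} MiB minimum, round up to {minimum_gib} GiB")
```

Then add whatever headroom your growth guesstimate calls for on top of that minimum.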

You can complicate things with ballooning. I tend to set guests up so the configured memory is higher than what they normally run with - e.g. a guest might be configured to have (say) 4G max mem, but start with that ballooned down to 3G, giving me a bit more flexibility if I need to adjust a running system.
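For reference, that kind of setup is just a couple of lines in the guest config file (a sketch - the name and sizes here are made up, and exact keys can vary between toolstack versions):

```
# /etc/xen/guest1.cfg - illustrative fragment
name   = "guest1"
maxmem = 4096   # ceiling the balloon can grow to (MiB)
memory = 3072   # what the guest actually starts with (MiB)
```

You can then grow a running guest up to its maxmem with something like `xl mem-set guest1 4096`.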

AIUI there are methods for having dynamic memory allocations, but unless you **REALLY** understand your systems and how they behave, then there is great scope for creating trouble if several systems all want this dynamically shared memory at the same time !

Disk space is really just the same - and again, beware of dynamic disks (auto-expanding disks). Many of the problems I've witnessed with virtualised environments (on Windows systems managed by colleagues) are due to expanding disks, overcommitted space, and guests suddenly deciding they need more space until the host runs out.


CPU isn't that much different - work out how many cores each guest needs in order to provide the required performance under peak load, and add them all up.
There's probably much more scope for applying diversity of demand here, in that CPU cores are readily shared, unlike memory and disk. But again, you need to understand your workloads - there's no use sharing (say) 4 cores between two guests that each need 4 cores if they are going to need them at the same time, but it is OK if you know that one is going to be idle(ish) when the other is busy.
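In config-file terms that's just (again a sketch, with invented values):

```
# illustrative fragment - give the guest 4 virtual CPUs
vcpus = 4
# cpus = "2-5"   # optionally pin to physical cores 2-5, if a guest
                 # must not contend with others for CPU time
```

Whether you pin or let the scheduler share cores freely is exactly the workload-knowledge question above.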



Re: Xen common hardware configuration

Marin Cosmin
Thanks a lot Simon for your detailed response.
Meanwhile I searched for the latest generation of Intel server CPUs (I don't know what the most popular/common CPU configuration Xen runs on) and saw that clock speeds range from 1.33GHz to 3.2GHz, core counts from 4 up to 24, and the amount of LLC varies from 10MB up to 60MB at the high end. Using Intel CAT technology and a larger amount of cache, better performance can be achieved.

