Network performance - sending from VM to VM using TCP

Network performance - sending from VM to VM using TCP

Cherie Cheung
Hi,

I have been simulating a network using dummynet and evaluating it
using netperf. Xen3.0-unstable is used and the VMs are
vmlinuz-2.6.11-xenU. The simulated link is 300Mbps with 80ms RTT.
Using netperf, I sent data using TCP from domain-0 of machine 1 to
domain-0 of machine 2. Then I repeated the experiment, but this time
from VM-1 of machine 1 to VM-1 of machine 2.

However, the performance across the two VMs is substantially worse
than that across domain-0. Here's the result:

FROM VM to VM:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw10.ucsd.edu
(172.19.222.210) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  65536  65536    80.28      24.83


FROM domain-0 to domain-0:
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to damp.ucsd.edu
(137.110.222.236) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  65536  65536    80.11     280.62

Here are the network buffer settings:

net.core.wmem_max = 8388608
net.core.rmem_max = 8388608
net.ipv4.tcp_bic = 1
net.ipv4.tcp_rmem = 4096        87380   8388608
net.ipv4.tcp_wmem = 4096        65536   8388608
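
For reference, these values can be read back on each host with sysctl, e.g.:

  sysctl net.core.rmem_max net.core.wmem_max net.ipv4.tcp_rmem net.ipv4.tcp_wmem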

Does anyone know why the performance across the two VMs is so bad? Is
there a fix for it? Thank you.

Cherie


Re: [Xen-devel] Network performance - sending from VM to VM using TCP

Kip Macy-2
Are you using FreeBSD or Linux?

On Thu, 26 May 2005, Cherie Cheung wrote:

> I have been simulating a network using dummynet and evaluating it
> using netperf. [...]

--
"I will not be pushed, filed, stamped, indexed, briefed, debriefed or numbered.
My life is my own."


Re: [Xen-devel] Network performance - sending from VM to VM using TCP

Nivedita Singhvi
Cherie Cheung wrote:
> Hi,
>
> I have been simulating a network using dummynet and evaluating it

I haven't played with dummynet and don't know if there are
additional issues inherent in using dummynet itself...

> using netperf. Xen3.0-unstable is used and the VMs are
> vmlinuz-2.6.11-xenU. The simulated link is 300Mbps with 80ms RTT.
> Using netperf, I sent data using TCP from domain-0 of machine 1 to
> domain-0 of machine 2. Then I repeat the experiment, but this time
> from VM-1 of machine 1 to VM-1 of machine 2.
>
> However, the performance across the two VMs is substantially worse
> than that across domain-0. Here's the result:
>
> FROM VM to VM:
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw10.ucsd.edu
> (172.19.222.210) port 0 AF_INET
> Recv   Send    Send                          
> Socket Socket  Message  Elapsed              
> Size   Size    Size     Time     Throughput  
> bytes  bytes   bytes    secs.    10^6bits/sec  
>
>  87380  65536  65536    80.28      24.83

Your send message size is exactly your socket size. It is also
the size of the default write buffer. The kernel only uses about
half the buffer (very roughly) for data.

Were you testing with 65536 bytes exactly for some reason?
This is stop-and-go traffic, and normally the kernel doesn't
use the entire buffer to store data - it's roughly half...

Could you test with different send sizes?
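
For example, something along these lines (assuming netperf 2.x option
syntax, where the test-specific -m option after the "--" separator sets
the send message size):

  # vary the send message size, e.g. 4K, 16K, 32K
  netperf -H dw10.ucsd.edu -l 80 -- -m 16384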

> FROM domain-0 to domain-0:
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to damp.ucsd.edu
> (137.110.222.236) port 0 AF_INET
> Recv   Send    Send                          
> Socket Socket  Message  Elapsed              
> Size   Size    Size     Time     Throughput  
> bytes  bytes   bytes    secs.    10^6bits/sec  
>
>  87380  65536  65536    80.11     280.62
>
> Here's the setting of the network buffer:
>
> net.core.wmem_max = 8388608
> net.core.rmem_max = 8388608
> net.ipv4.tcp_bic = 1
> net.ipv4.tcp_rmem = 4096        87380   8388608
> net.ipv4.tcp_wmem = 4096        65536   8388608
>
> Does anyone know why the performance across two VMs is so bad? Any fix
> to it? Thank you.

If you just want to improve your performance, increase your
buffer sizes!

For example:
tcp_rmem = 4096 1398080 8388608
tcp_wmem = 4096 1398080 8388608
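
One way to apply these on both sender and receiver (as root; the
net.core.rmem_max/wmem_max values you already have are large enough):

  sysctl -w net.ipv4.tcp_rmem="4096 1398080 8388608"
  sysctl -w net.ipv4.tcp_wmem="4096 1398080 8388608"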

Were you seeing losses, queue overflows?

More importantly, how much memory do you have in the system and
how were you allocating it?


thanks,
Nivedita


Re: [Xen-devel] Network performance - sending from VM to VM using TCP

Cherie Cheung
Hi,

Thanks for answering me. Here's what I have:

> Were you testing with 65536 bytes exactly for some reason?
> This is stop and go traffic and normally the kernel doesn't
> use the entire buffer to store data - it's roughly half...
>
> Could you test with different send sizes?

No special reason for that. What do you mean by the kernel not using
the entire buffer to store the data? I have tried different send
sizes; it doesn't make any noticeable difference.

> If you just want to improve your peformance, increase your
> buffer sizes!
>
> For example:
> tcp_rmem = 4096 1398080 8388608
> tcp_wmem = 4096 1398080 8388608

The performance only improved a little.

TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
(172.19.222.215) port 0 AF_INET
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

1398080 1398080 1398080    80.39      26.55  

It doesn't compare with domain-0 to domain-0.

> Were you seeing losses, queue overflows?
How do I check that?

> More importantly, how much memory do you have in the system and
> how were you allocating it?
It says 127MB in 'sudo xm list'.

Is it really a problem with the buffer size and send size? Domain-0
achieves such good performance under the same settings. Is the
bottleneck the overhead in the VM itself?

Also, I performed some more tests
with bandwidth 150Mbit/s and RTT 40ms:

domain0 to domain0
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  65536  65536    80.17     135.01  
 
vm to vm
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

 87380  65536  65536    80.55     134.80  

Under these settings, VM to VM performed as well as domain-0 to domain-0.
If I increased or decreased the BDP, the performance dropped again.

Any idea what is causing the problem?

Thanks.

Cherie



On 5/26/05, Nivedita Singhvi <[hidden email]> wrote:

> [...]


Re: Re: [Xen-devel] Network performance - sending from VM to VM using TCP

Xen User
Cherie Cheung wrote:

> [...]
> Under these settings, VM to VM performed as well as domain-0 to
> domain-0. If I increased or decreased the BDP, the performance
> dropped again.

Hi Cherie,

Please pardon my ignorance.  What is BDP?

TIA


Re: Re: [Xen-devel] Network performance - sending from VM to VM using TCP

Kip Macy
Bandwidth Delay Product - Google can give you better examples than I can.
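
Roughly, BDP = bandwidth x round-trip time - the amount of data that has
to be in flight to keep the pipe full. For the 300Mbps / 80ms link in
this thread that works out to about 3MB, e.g.:

  awk 'BEGIN { bw=300e6; rtt=0.080; bdp=bw*rtt/8;
               printf "BDP = %.0f bytes (~%.1f MB)\n", bdp, bdp/1048576 }'
  # prints: BDP = 3000000 bytes (~2.9 MB)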

On 5/26/05, Xen User <[hidden email]> wrote:

> Please pardon my ignorance.  What is BDP?


Re: [Xen-devel] Network performance - sending from VM to VM using TCP

Nivedita Singhvi
Cherie Cheung wrote:
>
>>Could you test with different send sizes?
>
>
> No special reason for that. What do you mean by kernel doesn't use the
> entire buffer to store the data? I have tried different send size. It
> doesn't make any noticable difference.

Normally, if you do a write that fits in the send buffer,
the write will return immediately. If you don't have enough
room, it will block until the buffer drains and there is
enough room. Also, the kernel reserves a fraction of the socket
buffer space for internal data management. If you do a setsockopt()
of 128K bytes, for instance, and then do a getsockopt(), you will
notice that the kernel reports twice what you asked for.


> The performance only improved a little.
>
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
> (172.19.222.215) port 0 AF_INET
> Recv   Send    Send                          
> Socket Socket  Message  Elapsed              
> Size   Size    Size     Time     Throughput  
> bytes  bytes   bytes    secs.    10^6bits/sec  
>
> 1398080 1398080 1398080    80.39      26.55  

Ah, the idea is not to use such a large send message
size! Increase your buffer sizes - but not your send
message size. I'm not sure netperf handles that well -
this is a memory allocation issue. netperf is an intensive
application in TCP stream tests - it does no disk
activity - it generates data on the fly and does
repeated writes of that size. You might just be
blocking on memory.

I'd be very interested in what you get with those buffer
sizes and 1K, 4K, 16K message sizes.

> can't compare with that of domain0 to domain0.

So both domains have 128MB? Can you bump that up to, say, 512MB?
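One way to do that, sketched for the xm tools of this era (the domain
name 'vm1' here is just a placeholder): raise 'memory = 512' in the
domU config file and restart the guest, or, if your xm build supports
mem-set, balloon the running domain:

  xm mem-set vm1 512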

>>Were you seeing losses, queue overflows?
>
> how to check that?

You can check with netstat -s or ifconfig, for instance.
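
For example (assuming the interface is eth0; in dom0 the per-domain
vif interfaces are worth checking too):

  # TCP-level retransmissions and timeouts
  netstat -s | grep -i -E 'retrans|timeout'
  # interface-level errors, drops and overruns
  ifconfig eth0 | grep -E 'errors|dropped|overruns'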

> is it really the problem with the buffer size and send size? domain0
> can achieve such good performance under the same settings. Is the
> bottleneck related to the overhead in the VM that causes the problem?
>
> also, I had performed some more tests:
> with bandwidth 150Mbit/s and RTT 40ms
>
> domain0 to domain0
> Recv   Send    Send                          
> Socket Socket  Message  Elapsed              
> Size   Size    Size     Time     Throughput  
> bytes  bytes   bytes    secs.    10^6bits/sec  
>
>  87380  65536  65536    80.17     135.01  
>  
> vm to vm
> Recv   Send    Send                          
> Socket Socket  Message  Elapsed              
> Size   Size    Size     Time     Throughput  
> bytes  bytes   bytes    secs.    10^6bits/sec  
>
>  87380  65536  65536    80.55     134.80  
>
> under these setting, VM to VM performed as good as domain0 to domain0.
> if I increased or decreased the BDP, the performance dropped again.

Very interesting - possibly you're managing to send
closer to your real bandwidth-delay product? It would be
interesting to get the numbers across a range of RTTs.

thanks,
Nivedita




RE: Network performance - sending from VM to VM using TCP

Ian Pratt
 > I have been simulating a network using dummynet and
> evaluating it using netperf. Xen3.0-unstable is used and the
> VMs are vmlinuz-2.6.11-xenU. The simulated link is 300Mbps
> with 80ms RTT.
> Using netperf, I sent data using TCP from domain-0 of machine
> 1 to domain-0 of machine 2. Then I repeat the experiment, but
> this time from VM-1 of machine 1 to VM-1 of machine 2.
>
> However, the performance across the two VMs is substantially
> worse than that across domain-0. Here's the result:

Someone else was having problems with low performance via dummynet a
couple of months back. It's presumably dummynet's packet scheduling
causing some bad interaction with the batch processing of packets in
netfront/back.

The first step to understanding this is probably to capture a tcpdump
and look at it with tcptrace to see what's happening with window sizes
and scheduling of packets.
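
For example (interface, host and file names here are only illustrative;
capture in dom0 on the sending machine):

  # capture headers only for the netperf connection
  tcpdump -i eth0 -s 96 -w vm2vm.pcap host 172.19.222.210
  # per-connection summary plus time-sequence/window graphs for xplot
  tcptrace -l -G vm2vm.pcap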

Ian


Re: [Xen-devel] RE: Network performance - sending from VM to VM using TCP

Bin Ren
Cherie:

I've tried to repeat the testing and here are the results:

Basic setup: the Xen machine runs the latest xen-unstable and Debian
sarge; the server runs the latest Gentoo Linux (native). Both have
Intel e1000 MT NICs and are connected directly through a 1Gbps switch.

(1) AFAIK, dummynet is FreeBSD-only, so I used the Linux kernel
network emulator module
(http://developer.osdl.org/shemminger/netem/index.html) and set the
delay on the server's eth0 to 10ms using the command 'tc qdisc add dev
eth0 root netem delay 10ms'.
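
To emulate something closer to the 300Mbps / 80ms link in the original
report, netem can be combined with a rate limiter - a rough sketch,
with tbf burst/limit values that are only illustrative and would need
tuning:

  tc qdisc add dev eth0 root handle 1:0 netem delay 80ms
  tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 300mbit burst 64kb limit 3mb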

(2) With the Linux kernel's default networking settings (i.e. no TCP
tuning), netperf -H server -l 30:

without delay, without tuning
dom0->server: 665Mbps
dom1->server: 490Mbps

with 10ms delay, without tuning
dom0->server: 82Mbps
dom1->server: 73Mbps

Note that *both* dom0 and dom1 show significant throughput drops. This
is different from what you've seen.

(3) With Linux TCP tuning added
(http://www-didc.lbl.gov/TCP-tuning/linux.html), netperf -H server -l
30:

without delay, with tuning
dom0->server: 654Mbps
dom1->server: 488Mbps

with 10ms delay, with tuning
dom0->server: 610Mbps
dom1->server: 480Mbps

Note: without delay, tuning doesn't provide any gain in throughput.
However, with delay, both dom0 and dom1 see only a *slight* drop in
throughput. This makes sense, as the Linux TCP/IP stack needs tuning
for long fat pipes. In your case, 300Mbps + 80ms RTT seems to emulate
a transcontinental link. Still, YMMV.

- Bin

On 5/27/05, Ian Pratt <[hidden email]> wrote:

> [...]


Re: [Xen-devel] RE: Network performance - sending from VM to VM using TCP

Cherie Cheung
Bin,

Thank you so much. I'll try that out and see if I can reproduce your results.

Cherie

On 5/28/05, Bin Ren <[hidden email]> wrote:

> [...]
