RE: [Xen-users] Network performance - sending from VM to VM using TCP

RE: [Xen-users] Network performance - sending from VM to VM using TCP

Ian Pratt
 > I have been simulating a network using dummynet and
> evaluating it using netperf. Xen3.0-unstable is used and the
> VMs are vmlinuz-2.6.11-xenU. The simulated link is 300Mbps
> with 80ms RTT.
> Using netperf, I sent data using TCP from domain-0 of machine
> 1 to domain-0 of machine 2. Then I repeated the experiment, but
> this time from VM-1 of machine 1 to VM-1 of machine 2.
>
> However, the performance across the two VMs is substantially
> worse than that across domain-0. Here's the result:

Someone else was having problems with low performance via dummynet a
couple of months back. It's presumably dummynet's packet scheduling
causing some bad interaction with the batch processing of packets in
netfront/back.

The first step to understanding this is probably to capture a tcpdump
and look at it with tcptrace to see what's happening with window sizes
and scheduling of packets.
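
For example (interface and file names here are purely illustrative), capture just the headers on the sending guest during a transfer and post-process the trace offline:

  tcpdump -i eth0 -s 96 -w vm2vm.pcap host peerhost   # 'peerhost' = the receiving machine; headers only keeps the file small
  tcptrace -l -r vm2vm.pcap                           # per-connection stats, incl. RTT and window sizes
  tcptrace -G vm2vm.pcap                              # time-sequence graphs, viewable with xplot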

Ian

_______________________________________________
Xen-devel mailing list
[hidden email]
http://lists.xensource.com/xen-devel

Re: RE: [Xen-users] Network performance - sending from VM to VM using TCP

Bin Ren
Cherie:

I've tried to repeat the testing and here are the results:

Basic setup: the Xen machine runs the latest xen-unstable and Debian sarge;
the server runs the latest Gentoo Linux (native). Both have Intel e1000 MT
NICs and are connected directly through a 1Gbps switch.

(1) AFAIK, dummynet is FreeBSD-only, so I use the Linux kernel network
emulator module, netem
(http://developer.osdl.org/shemminger/netem/index.html), and set the
delay on the server's eth0 to 10ms with the command 'tc qdisc add dev
eth0 root netem delay 10ms'.
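
A couple of related tc commands may be useful when repeating this (standard iproute2 usage):

  tc qdisc show dev eth0                           # confirm the netem qdisc is installed
  tc qdisc change dev eth0 root netem delay 80ms   # roughly approximate your 80ms RTT (one-way delay added on one side)
  tc qdisc del dev eth0 root                       # remove the emulated delay again afterwards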

(2) With the Linux kernel's default networking settings (i.e. no TCP
tuning), netperf -H server -l 30:

without delay, without tuning
dom0->server: 665Mbps
dom1->server: 490Mbps

with 10ms delay, without tuning
dom0->server: 82Mbps
dom1->server: 73Mbps

Note that *both* dom0 and dom1 show significant throughput drops. This
is different from what you've seen.

(3) With Linux TCP tuning applied
(http://www-didc.lbl.gov/TCP-tuning/linux.html), netperf -H server -l
30:

without delay, with tuning
dom0->server: 654Mbps
dom1->server: 488Mbps

with 10ms delay, with tuning
dom0->server: 610Mbps
dom1->server: 480Mbps

Note: without delay, tuning doesn't provide any gain in throughput.
However, with delay, both dom0 and dom1 see only a *slight* drop in
throughput. This makes sense, as the Linux TCP/IP stack needs tuning for
long fat pipes. In your case, 300Mbps with an 80ms RTT seems to emulate
a transcontinental link. Still, YMMV.
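
For a sense of scale, the bandwidth-delay product of a 300Mbps, 80ms path is roughly 300e6/8 * 0.08 ≈ 3MB, so the TCP windows (and therefore the socket buffers) need to be a few megabytes to fill the pipe. The LBL page boils down to sysctls along these lines (the exact values are illustrative, not a recommendation):

  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608
  sysctl -w net.ipv4.tcp_rmem="4096 87380 8388608"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 8388608"

Alternatively, netperf can request large socket buffers for a single run via its test-specific options, e.g. 'netperf -H server -l 30 -- -s 4194304 -S 4194304'.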

- Bin


Re: RE: [Xen-users] Network performance - sending from VM to VM using TCP

Cherie Cheung
Bin,

Thank you so much. I'll test that out to try to obtain these results.

Cherie
