a strange issue on disk scheduler in xen


smiler
Hi all,

In Xen 4.0 with Linux 2.6.18.8, noop is the default I/O scheduler in domU; in Xen 4.0 with Linux 2.6.31.8, CFQ is the default scheduler in domU. In both setups, CFQ is the default scheduler in dom0. On both kernels, I compared the execution time of two sequential-read runs in a domU, once with domU using noop and once with domU using CFQ.
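For anyone wanting to reproduce the comparison: the active scheduler for each block device can be inspected and switched at runtime through sysfs. The device name xvda below is an assumption for a typical PV domU; adjust it to whatever your domU's disk is called.

```shell
# Show which I/O scheduler each visible block device is using;
# the active scheduler is printed in square brackets.
found=0
for f in /sys/block/*/queue/scheduler; do
    if [ -r "$f" ]; then
        printf '%s: %s\n' "${f%/queue/scheduler}" "$(cat "$f")"
        found=1
    fi
done
[ "$found" -eq 1 ] || echo "no block-device scheduler files visible"

# To switch a domU disk (e.g. xvda -- adjust for your setup) to noop:
#   echo noop > /sys/block/xvda/queue/scheduler
```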
The sequential-read workload is generated with sysbench, reading 1 GB, like this: sysbench --num-threads=16 --test=fileio --file-total-size=1G --file-test-mode=seqrd prepare/run/cleanup
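Spelled out, the prepare/run/cleanup shorthand above is three separate sysbench invocations sharing the same options. The sketch below prints each full command as a dry run (drop the echo to actually execute them):

```shell
# Same sysbench options for all three phases; 'echo' makes this a dry run.
OPTS="--num-threads=16 --test=fileio --file-total-size=1G --file-test-mode=seqrd"
for phase in prepare run cleanup; do
    echo sysbench $OPTS $phase
done
```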

I observe a strange phenomenon:
In Xen 4.0 with Linux 2.6.18.8:
noop in domU
seqrd1: 34.90s  seqrd2: 32.49s
CFQ in domU
seqrd1: 245.94s  seqrd2: 256.09s
Here the execution time of seqrd under CFQ in domU is much worse than under noop in domU.
However, in Xen 4.0 with Linux 2.6.31.8:
noop in domU
seqrd1: 36.83s  seqrd2: 37.01s
CFQ in domU
seqrd1: 35.68s  seqrd2: 35.76s
Here the execution time of seqrd under CFQ in domU is almost the same as under noop in domU.

The noop-versus-CFQ comparison in domU under Xen 4.0 with Linux 2.6.18.8 is thus completely different from that under Xen 4.0 with Linux 2.6.31.8, which is very strange. I also ran the experiment on different machines, and the results are the same. Does anyone know the reason?