Stable VBD Types


Stable VBD Types

Alex Adranghi
For a stable, deployable, yet flexible solution, what VBD types do people
here recommend?

I know from the documentation (and common sense) that file-backed VBDs
perform poorly under I/O-intensive workloads, so I'm leaning towards LVM
as a solution.

One thing that has caught my eye is the snapshot ability for a logical
volume, but after a little more research and dipping into the archives
I've heard nightmare stories about using snapshots with Xen. Is this
still the case?

Am I just better off simply installing domains as I need them? Thanks.
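For reference, the kind of LVM-backed setup I'm considering would look
something like this (the volume group "vg0" and domain name "domu1" are
just placeholders):

```shell
# carve a logical volume out of an existing volume group
lvcreate -L 4G -n domu1-disk vg0

# then hand it to the domU as a phy: VBD in its config file, e.g.:
#   disk = [ 'phy:vg0/domu1-disk,sda1,w' ]
```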

--
Regards,

Alex Adranghi
---------------------------
http://www.alexadranghi.com
[hidden email]

_______________________________________________
Xen-users mailing list
[hidden email]
http://lists.xensource.com/xen-users


Re: Stable VBD Types

andy smith
On Mon, Jun 20, 2005 at 05:23:26PM +0100, Alex Adranghi wrote:
> For a stable, deployable, yet flexible solution, what VBD types do people
> here recommend?
>
> I know from the Documentation (and common-sense) that file-backed vbd's
> will die for I/O intensive activities, so I'm leaning towards LVM as a
> solution.

I use LVM for that reason, and because I'm already familiar with LVM
I see no need to be adding file-backed VBDs into the mix.

> One thing that has caught my eye is the snapshot ability for a logical
> volume, but after a little more research and dipping into the archives
> I've heard nightmare stories about using snapshots with Xen. Is this
> still the case?

I'm currently using snapshotting for backup purposes, snapshotting
around 20 LVs every 20 minutes and mounting them in dom0.
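Roughly, each backup cycle does something like the following (LV, volume
group, and mount-point names here are placeholders, not my actual setup):

```shell
# take a snapshot of the guest's LV; it only needs enough space
# for the blocks that change while the snapshot exists
lvcreate -s -L 1G -n domu1-snap /dev/vg0/domu1-disk

# mount it read-only in dom0 and copy the data off
mount -o ro /dev/vg0/domu1-snap /mnt/snap
rsync -a /mnt/snap/ /backup/domu1/

# then tear it down again
umount /mnt/snap
lvremove -f /dev/vg0/domu1-snap
```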

I haven't had any problems with this except that with 256M RAM my
dom0 ran out of kernel memory and an lvcreate operation deadlocked.
I reported this to the LVM list and was told it was because I had
too little RAM.  After upgrading dom0 to 512M I have had no such
problems, although I'm still not sure exactly how much RAM each
snapshot needs.

Today I saw a question on the linux-lvm list asking if snapshots
were considered stable in LVM2 now.  Maybe following that thread
would be more helpful than my anecdotes.

> Am I just better off simply installing domains as I need them? Thanks.

How do you mean?  If you start domains that you don't need then the
RAM is tied up in those domains.  If you wanted some other domain to
increase its RAM then it may be a laborious process taking a bit of
RAM from multiple other domains first.  Also the idle processes in
all those domains would take a small amount of CPU away.  Those are
about the only problems I can see with running domains you don't
immediately need.


Re: Stable VBD Types

Alex Adranghi

>
> I'm currently using snapshotting for backup purposes, snapshotting
> around 20 LVs every 20 minutes and mounting them in dom0.
>
> I haven't had any problems with this except that with 256M RAM my
> dom0 ran out of kernel memory and an lvcreate operation deadlocked.
> I reported this to the LVM list and was told it was because I had
> too little RAM.  After upgrading dom0 to 512M I have had no such
> problems, although I still am not sure exactly how much RAM each
> snapshot will need.
>
> Today I saw a question on the linux-lvm list asking if snapshots
> were considered stable in LVM2 now.  Maybe following that thread
> would be more helpful than my anecdotes.
Thanks. Although that is relevant to what I'm doing, what I meant was
using LVM snapshots as a way to keep common files from being duplicated
on disk. I'll quote the LVM HOWTO, as it explains it a little better.

"It is also useful for creating volumes for use with Xen. You can create
a disk image, then snapshot it and modify the snapshot for a particular
domU instance. You can then create another snapshot of the original
volume, and modify that one for a different domU instance. Since the
only storage used by a snapshot is blocks that were changed on the
origin or the snapshot, the majority of the volume is shared by the
domU's."
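In LVM2 terms, that recipe would presumably be something along these
lines (all names are placeholders):

```shell
# base image that all guests share
lvcreate -L 4G -n base vg0
# ... install the template system onto /dev/vg0/base ...

# one writable snapshot per domU; each snapshot only stores
# the blocks that domU changes
lvcreate -s -L 512M -n domu1-root /dev/vg0/base
lvcreate -s -L 512M -n domu2-root /dev/vg0/base
```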


>
> How do you mean?  If you start domains that you don't need then the
> RAM is tied up in those domains.  If you wanted some other domain to
> increase its RAM then it may be a laborious process taking a bit of
> RAM from multiple other domains first.  Also the idle processes in
> all those domains would take a small amount of CPU away.  Those are
> about the only problems I can see with running domains you don't
> immediately need.

I meant create them as they are needed, rather than start them.
Sorry.

Thanks


--
Regards,

Alex Adranghi
---------------------------
http://www.alexadranghi.com
[hidden email]


Re: Stable VBD Types

Nils Toedtmann
On Monday, 20 Jun 2005 at 18:11 +0100, Alex Adranghi wrote:
[...]

> Thanks. Although that is relevant to what I'm doing, what I meant was
> using LVM snapshots as a way to keep common files from being duplicated
> on disk. I'll quote the LVM HOWTO, as it explains it a little better.
>
> "It is also useful for creating volumes for use with Xen. You can create
> a disk image, then snapshot it and modify the snapshot for a particular
> domU instance. You can then create another snapshot of the original
> volume, and modify that one for a different domU instance. Since the
> only storage used by a snapshot is blocks that were changed on the
> origin or the snapshot, the majority of the volume is shared by the
> domU's."
[...]

We use the device-mapper for this. Works for us [tm]:

  # CoW files (initially zeroed) to hold each domU's changed blocks
  dd if=/dev/zero of=/tmp/CoW1 bs=1M count=$CoW_SIZE
  dd if=/dev/zero of=/tmp/CoW2 bs=1M count=$CoW_SIZE

  # expose the base image and the CoW files as block devices
  losetup /dev/loop0 root_fs
  losetup /dev/loop1 /tmp/CoW1
  losetup /dev/loop2 /tmp/CoW2

  # size of the base device in 512-byte sectors
  SECTORS=`blockdev --getsize /dev/loop0`

  # a read-only base mapping, plus one persistent ("p") snapshot
  # per domU with an 8-sector chunk size
  echo "0 $SECTORS linear                           /dev/loop0   0" \
    | dmsetup create rootfs_base

  echo "0 $SECTORS snapshot /dev/mapper/rootfs_base /dev/loop1 p 8" \
    | dmsetup create rootfs1

  echo "0 $SECTORS snapshot /dev/mapper/rootfs_base /dev/loop2 p 8" \
    | dmsetup create rootfs2

You should now be able to use /dev/mapper/rootfs{1|2} read/write. Only
blocks that differ from "root_fs" get written to "/tmp/CoW{1|2}".

This way we boot >20 domUs off one root_fs (after shutdown we delete
the CoW files). I have no clue how to do that with LVM[2].
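For completeness, the teardown after a domU shuts down is just the
reverse (same device names as above; the base mapping has to go last,
since the snapshots reference it):

```shell
# remove the per-domU snapshot mappings, then the base mapping
dmsetup remove rootfs1
dmsetup remove rootfs2
dmsetup remove rootfs_base

# detach the loop devices and delete the CoW files
losetup -d /dev/loop1
losetup -d /dev/loop2
losetup -d /dev/loop0
rm /tmp/CoW1 /tmp/CoW2
```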

/nils.  

--
there is no sig

