
iscsi vs nfs for xen VMs


iscsi vs nfs for xen VMs

Jia Rao
Hi all,

I used to host the disk images of my Xen VMs on an NFS server and am considering moving to iSCSI for performance reasons.
Here is the problem I encountered:

With iSCSI, there are two ways to export a virtual disk to the physical machine hosting the VMs.

1. Export the virtual disk (on the target side it is either an img file or an LVM volume) as a block device, e.g. sdc, then boot the VM using "phy:/dev/sdc".

2. Export the partition containing the virtual disks (in this case, the virtual disks are img files) to each physical machine as a block device, and then mount the new device into the file system on each physical machine. This way the img files are accessible from every physical machine (similar to NFS), and the VMs are booted through tapdisk, "tap:aio:/PATH_TO_IMG_FILES".

I prefer the second approach because I need tapdisk (each virtual disk is a process on the host machine) to control the I/O priority among VMs.
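For reference, the two styles correspond to domU disk lines roughly like the following (a minimal sketch; the device name, mount point and image path are made up for illustration):

    # Option 1: the imported LUN appears on the host as /dev/sdc and is handed to the guest raw
    disk = [ 'phy:/dev/sdc,xvda,w' ]

    # Option 2: the shared LUN is mounted on the host and the guest uses an image file via tapdisk
    disk = [ 'tap:aio:/mnt/shared-lun/vm01.img,xvda,w' ]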

However, there is a problem when I share the LUN containing all the VM img files among multiple hosts.
Any modification to the LUN (e.g. writing data to the folder on which the LUN is mounted) is not immediately observable on the other hosts sharing the LUN (with NFS, changes are immediately synchronized to all NFS clients). The changes only become visible when I unmount the LUN and remount it on the other physical hosts.

From searching the Internet, it seems that iSCSI is not intended for sharing a single LUN between multiple hosts.
Is that true, or do I need some specific configuration of the target or initiator to make changes immediately visible to multiple initiators?

Thanks in advance,
Jia


Re: iscsi vs nfs for xen VMs

Javier Guerra Giraldez
On Fri, Aug 7, 2009 at 8:48 AM, Jia Rao<[hidden email]> wrote:
> From searching the Internet, it seems that iSCSI is not intended for sharing
> a single LUN between multiple hosts.

the point isn't iSCSI, it's the filesystem on top of it.

a 'normal' filesystem (ext3, XFS, JFS, etc.) isn't safe to mount on
multiple hosts simultaneously.

a cluster filesystem (GFS, OCFS, clusterXFS, etc.) is specifically
designed for that.

in short, you have three options for sharing storage between Xen boxes:

1: NFS with file-based images
2: share a LUN, put a cluster filesystem on it, and put file-based images inside
3: multiple LUNs, each one is a blockdevice-based image
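as a rough illustration of option 2, the flow looks something like this (just a sketch, assuming OCFS2 and its standard tools; the device and mount paths are invented):

    # once, from any host: put a cluster filesystem on the shared LUN
    mkfs.ocfs2 -L xen-images /dev/sdc

    # on every Xen host (the o2cb cluster stack must already be configured
    # and running on all nodes)
    mount -t ocfs2 /dev/sdc /var/lib/xen/images

    # the domU config then points at a file on the shared mount, e.g.
    # disk = [ 'tap:aio:/var/lib/xen/images/vm01.img,xvda,w' ]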

--
Javier


Re: iscsi vs nfs for xen VMs

Christopher Chen
In reply to this post by Jia Rao
Jia:

You're partially correct. iSCSI as a protocol has no problem allowing
multiple initiators access to the same block device, but you're almost
certain to run into corruption if you don't set up a higher-level
locking mechanism to make sure your access is consistent across all
hosts.

To state again: iSCSI is not in itself a protocol that provides all
the features necessary for a shared filesystem.

If you want to do that, you need to look into the shared filesystem
space (OCFS2, GFS, etc).

The other option is to set up individual logical volumes on the shared
LUN for each VM. Note that this still requires an inter-machine locking
protocol--in my case, clustered LVM.

There are quite a few of us who have gone ahead and used clustered LVM
with the phy handler--this gives us consistent LVM metadata across
the machines, while we administratively restrict access to
each logical volume to one machine at a time (unless we're doing a
live migration).
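A rough sketch of that layout, assuming clvmd is already running on all hosts (the VG/LV names and sizes here are invented):

    # once, from any host: a clustered volume group on the shared iSCSI LUN
    pvcreate /dev/sdc
    vgcreate -c y xenvg /dev/sdc

    # one logical volume per VM
    lvcreate -L 10G -n vm01-disk xenvg

    # the domU config then uses the phy handler against the LV, e.g.
    # disk = [ 'phy:/dev/xenvg/vm01-disk,xvda,w' ]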

I hope this helps.

Cheers

cc

On Fri, Aug 7, 2009 at 6:48 AM, Jia Rao<[hidden email]> wrote:

> Hi all,
>
> I used to host the disk images of my Xen VMs on an NFS server and am
> considering moving to iSCSI for performance reasons.
> Here is the problem I encountered:
>
> With iSCSI, there are two ways to export a virtual disk to the physical
> machine hosting the VMs.
>
> 1. Export the virtual disk (on the target side it is either an img file or
> an LVM volume) as a block device, e.g. sdc, then boot the VM using
> "phy:/dev/sdc".
>
> 2. Export the partition containing the virtual disks (in this case, the
> virtual disks are img files) to each physical machine as a block device,
> and then mount the new device into the file system on each physical machine.
> This way the img files are accessible from every physical machine (similar
> to NFS), and the VMs are booted through tapdisk, "tap:aio:/PATH_TO_IMG_FILES".
>
> I prefer the second approach because I need tapdisk (each virtual disk is a
> process on the host machine) to control the I/O priority among VMs.
>
> However, there is a problem when I share the LUN containing all the VM img
> files among multiple hosts.
> Any modification to the LUN (e.g. writing data to the folder on which the
> LUN is mounted) is not immediately observable on the other hosts sharing the
> LUN (with NFS, changes are immediately synchronized to all NFS clients). The
> changes only become visible when I unmount the LUN and remount it on the
> other physical hosts.
>
> From searching the Internet, it seems that iSCSI is not intended for sharing
> a single LUN between multiple hosts.
> Is that true, or do I need some specific configuration of the target or
> initiator to make changes immediately visible to multiple initiators?
>
> Thanks in advance,
> Jia



--
Chris Chen <[hidden email]>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall


Re: iscsi vs nfs for xen VMs

Jia Rao
Thank you very much for the prompt replies.

My motivation for moving to iSCSI is purely performance.
The physical hosts and the storage server are connected through a 1 Gb switch. The storage server uses a RAID-5 disk array.
My current testing with iozone inside the VMs produced similar performance results for sequential and random reads on both iSCSI and NFS.
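(For reference, the runs were along these lines; the file size and record length shown are illustrative, not the exact command I used:)

    # sequential write/read (tests 0 and 1) and random read/write (test 2),
    # 1 GB file with 64 KB records, run from inside a VM
    iozone -i 0 -i 1 -i 2 -s 1g -r 64k -f /mnt/test/iozone.tmp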

I was told it would make a big difference once there are 10-15 VMs sharing the storage server. In my case, I have 8-10 VMs.

Does anyone have experience with a larger number of VMs on both NFS and iSCSI?

Jia.


Re: iscsi vs nfs for xen VMs

xensource-2
In reply to this post by Christopher Chen
Hi,

Funny enough, we have been running Xen over iSCSI for years, and we are now moving to a NetApp NAS to migrate to NFS, for the following reasons:
- Easier setup, no matter how many Xen hosts you have.
- It avoids dealing with a cluster filesystem. (We use OCFS2 quite successfully anyway.)
- About 3 years ago we were using nbd + drbd + clvm2 + fenced + ... to get direct LVM volumes into the VMs working, but it is too complicated and too complex to maintain, especially if you have an issue or more than 2 dom0s.
- According to the benchmark I received, there is no difference in performance between NFS and iSCSI. There are some differences in certain cases due to the different levels of caching.

Regards,

François.

----- Original Message -----
From: "Christopher Chen" <[hidden email]>
To: "Jia Rao" <[hidden email]>
Cc: [hidden email]
Sent: Friday, 7 August, 2009 16:00:20 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs

Jia:

You're partially correct. iSCSI as a protocol has no problem allowing
multiple initiators access to the same block device, but you're almost
certain to run into corruption if you don't set up a higher-level
locking mechanism to make sure your access is consistent across all
hosts.

To state again: iSCSI is not in itself a protocol that provides all
the features necessary for a shared filesystem.

If you want to do that, you need to look into the shared filesystem
space (OCFS2, GFS, etc).

The other option is to set up individual logical volumes on the shared
LUN for each VM. Note that this still requires an inter-machine locking
protocol--in my case, clustered LVM.

There are quite a few of us who have gone ahead and used clustered LVM
with the phy handler--this gives us consistent LVM metadata across
the machines, while we administratively restrict access to
each logical volume to one machine at a time (unless we're doing a
live migration).

I hope this helps.

Cheers

cc

On Fri, Aug 7, 2009 at 6:48 AM, Jia Rao<[hidden email]> wrote:

> Hi all,
>
> I used to host the disk images of my Xen VMs on an NFS server and am
> considering moving to iSCSI for performance reasons.
> Here is the problem I encountered:
>
> With iSCSI, there are two ways to export a virtual disk to the physical
> machine hosting the VMs.
>
> 1. Export the virtual disk (on the target side it is either an img file or
> an LVM volume) as a block device, e.g. sdc, then boot the VM using
> "phy:/dev/sdc".
>
> 2. Export the partition containing the virtual disks (in this case, the
> virtual disks are img files) to each physical machine as a block device,
> and then mount the new device into the file system on each physical machine.
> This way the img files are accessible from every physical machine (similar
> to NFS), and the VMs are booted through tapdisk, "tap:aio:/PATH_TO_IMG_FILES".
>
> I prefer the second approach because I need tapdisk (each virtual disk is a
> process on the host machine) to control the I/O priority among VMs.
>
> However, there is a problem when I share the LUN containing all the VM img
> files among multiple hosts.
> Any modification to the LUN (e.g. writing data to the folder on which the
> LUN is mounted) is not immediately observable on the other hosts sharing the
> LUN (with NFS, changes are immediately synchronized to all NFS clients). The
> changes only become visible when I unmount the LUN and remount it on the
> other physical hosts.
>
> From searching the Internet, it seems that iSCSI is not intended for sharing
> a single LUN between multiple hosts.
> Is that true, or do I need some specific configuration of the target or
> initiator to make changes immediately visible to multiple initiators?
>
> Thanks in advance,
> Jia



--
Chris Chen <[hidden email]>
"The fact that yours is better than anyone else's
is not a guarantee that it's any good."
-- Seen on a wall


Re: iscsi vs nfs for xen VMs

Dustin Black-3
In reply to this post by Jia Rao
We are using iSCSI with CLVM and GFS2 very successfully with 10 physical Xen servers and 80 to 100 VMs running across them.  We use file-based disk images, all stored on a single GFS2 file system on a single iSCSI LUN accessible by all 10 Xen servers.
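The shape of that setup is roughly the following (a sketch only; the cluster name, VG/LV names and mount point are invented, and the cluster stack with CLVM must already be running on all nodes):

    # one big logical volume in a clustered VG on the shared iSCSI LUN,
    # created as in the CLVM example earlier in the thread

    # GFS2 with DLM locking and one journal per Xen server
    mkfs.gfs2 -p lock_dlm -t xencluster:images -j 10 /dev/xenvg/images

    # on every host
    mount -t gfs2 /dev/xenvg/images /var/lib/xen/images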


On Fri, Aug 7, 2009 at 10:11 AM, Jia Rao <[hidden email]> wrote:
Thank you very much for the prompt replies.

My motivation for moving to iSCSI is purely performance.
The physical hosts and the storage server are connected through a 1 Gb switch. The storage server uses a RAID-5 disk array.
My current testing with iozone inside the VMs produced similar performance results for sequential and random reads on both iSCSI and NFS.

I was told it would make a big difference once there are 10-15 VMs sharing the storage server. In my case, I have 8-10 VMs.

Does anyone have experience with a larger number of VMs on both NFS and iSCSI?

Jia.



Re: iscsi vs nfs for xen VMs

Ciro Iriarte
In reply to this post by xensource-2
2009/8/7 François Delpierre <[hidden email]>:

> Hi,
>
> Funny enough, we have been running Xen over iSCSI for years, and we are now
> moving to a NetApp NAS to migrate to NFS, for the following reasons:
> - Easier setup, no matter how many Xen hosts you have.
> - It avoids dealing with a cluster filesystem. (We use OCFS2 quite
> successfully anyway.)
> - About 3 years ago we were using nbd + drbd + clvm2 + fenced + ... to get
> direct LVM volumes into the VMs working, but it is too complicated and too
> complex to maintain, especially if you have an issue or more than 2 dom0s.
> - According to the benchmark I received, there is no difference in
> performance between NFS and iSCSI. There are some differences in certain
> cases due to the different levels of caching.
>
> Regards,
>
> François.

Hi, was this benchmark specific to Xen? Is it public?

Regards,

--
Ciro Iriarte
http://cyruspy.wordpress.com
--


Re: iscsi vs nfs for xen VMs

Agustin Lopez

Hi!

We are on the opposite road.
We used NFS, but when the I/O load increased the performance went down
and we had to change to iSCSI. This system has been working OK for several
months now.

(We tested NFS with both the file and tap backends.)

Regards,
Agustin




Ciro Iriarte wrote:

> 2009/8/7 François Delpierre <[hidden email]>:
>  
>> Hi,
>>
>> Funny enough, we have been running Xen over iSCSI for years, and we are now
>> moving to a NetApp NAS to migrate to NFS, for the following reasons:
>> - Easier setup, no matter how many Xen hosts you have.
>> - It avoids dealing with a cluster filesystem. (We use OCFS2 quite
>> successfully anyway.)
>> - About 3 years ago we were using nbd + drbd + clvm2 + fenced + ... to get
>> direct LVM volumes into the VMs working, but it is too complicated and too
>> complex to maintain, especially if you have an issue or more than 2 dom0s.
>> - According to the benchmark I received, there is no difference in
>> performance between NFS and iSCSI. There are some differences in certain
>> cases due to the different levels of caching.
>>
>> Regards,
>>
>> François.
>>    
>
> Hi, was this benchmark specific to Xen? Is it public?
>
> Regards,
>
>  



Re: iscsi vs nfs for xen VMs

xensource-2
In reply to this post by Jia Rao
Hi,

This was a study of VMware over NFS on NetApp (vs. iSCSI and vs. FC).
I just checked, and the document is public:
http://www.vmug.be/VMUG/Upload//meetings/VMUGBE06_20081010nfs.pdf

Regards,

François.


----- Original Message -----
From: "Ciro Iriarte" <[hidden email]>
To: "François Delpierre" <[hidden email]>
Cc: [hidden email]
Sent: Friday, 7 August, 2009 17:07:31 GMT +01:00 Amsterdam / Berlin / Bern / Rome / Stockholm / Vienna
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs

2009/8/7 François Delpierre <[hidden email]>:

> Hi,
>
> Funny enough, we have been running Xen over iSCSI for years, and we are now
> moving to a NetApp NAS to migrate to NFS, for the following reasons:
> - Easier setup, no matter how many Xen hosts you have.
> - It avoids dealing with a cluster filesystem. (We use OCFS2 quite
> successfully anyway.)
> - About 3 years ago we were using nbd + drbd + clvm2 + fenced + ... to get
> direct LVM volumes into the VMs working, but it is too complicated and too
> complex to maintain, especially if you have an issue or more than 2 dom0s.
> - According to the benchmark I received, there is no difference in
> performance between NFS and iSCSI. There are some differences in certain
> cases due to the different levels of caching.
>
> Regards,
>
> François.

Hi, was this benchmark specific to Xen? Is it public?

Regards,

--
Ciro Iriarte
http://cyruspy.wordpress.com
--


Re: iscsi vs nfs for xen VMs

Fajar A. Nugraha-3
In reply to this post by Jia Rao
On Fri, Aug 7, 2009 at 8:48 PM, Jia Rao<[hidden email]> wrote:
> I prefer the second approach because I need tapdisk (each virtual disk is a
> process in host machines) to control the IO priority among VMs.

You can still control I/O priority for block devices using dm-ioband.
http://sourceforge.net/apps/trac/ioband/

As for iSCSI or NFS, the general rule would be that NFS is easier, but
you need a high-performance NFS server to get decent performance
(NetApp comes to mind).

If you've already tried NFS and are not satisfied with its performance,
then you should try iSCSI: export each domU's storage as its own block
device, import it on dom0, and use /dev/disk/by-path/* in the domU config.
That gives a simple enough setup (no cluster FS required), decent
performance, and it's still possible to control disk I/O priority and
bandwidth.
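Roughly, a per-domU disk line then points straight at the imported LUN, and the device can optionally be wrapped in a dm-ioband target for weighting (the IQN, weight and device names below are invented, and the ioband table format is taken from the dm-ioband documentation, so double-check it against the version you install):

    # domU config using the stable by-path name of the imported LUN
    disk = [ 'phy:/dev/disk/by-path/ip-10.0.0.5:3260-iscsi-iqn.2009-08.example:vm01-lun-0,xvda,w' ]

    # optional: weight the device's I/O with dm-ioband
    SIZE=$(blockdev --getsz /dev/sdb)
    echo "0 $SIZE ioband /dev/sdb 1 0 0 none weight 0 :40" | dmsetup create ioband-vm01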

--
Fajar


Re: iscsi vs nfs for xen VMs

Jia Rao
dm-ioband seems to target the newer kernels (2.6.31). We are still using 2.6.18.8 and are reluctant to upgrade to a new kernel.
Can dm-ioband work with older kernels?

Jia

On Sun, Aug 9, 2009 at 11:11 PM, Fajar A. Nugraha <[hidden email]> wrote:
On Fri, Aug 7, 2009 at 8:48 PM, Jia Rao<[hidden email]> wrote:
> I prefer the second approach because I need tapdisk (each virtual disk is a
> process on the host machine) to control the I/O priority among VMs.

You can still control I/O priority for block devices using dm-ioband.
http://sourceforge.net/apps/trac/ioband/

As for iSCSI or NFS, the general rule would be that NFS is easier, but
you need a high-performance NFS server to get decent performance
(NetApp comes to mind).

If you've already tried NFS and are not satisfied with its performance,
then you should try iSCSI: export each domU's storage as its own block
device, import it on dom0, and use /dev/disk/by-path/* in the domU config.
That gives a simple enough setup (no cluster FS required), decent
performance, and it's still possible to control disk I/O priority and
bandwidth.

--
Fajar



Re: iscsi vs nfs for xen VMs

Fajar A. Nugraha-3
On Mon, Aug 10, 2009 at 9:28 PM, Jia Rao<[hidden email]> wrote:
> dm-ioband seems to target the newer kernels (2.6.31). We are still using
> 2.6.18.8 and are reluctant to upgrade to a new kernel.
> Can dm-ioband work with older kernels?

On the project page you'll see a binary for RHEL/CentOS 5, which is a
2.6.18 kernel, so yes, it should work. Personally I just use Red Hat's
kernel-xen (which works even with Xen >= 3.3, as long as you don't need
SCSI passthrough or other newer features) and the provided dm-ioband binary.

--
Fajar


Re: iscsi vs nfs for xen VMs

Ozgur Akan
In reply to this post by Dustin Black-3
Hi Jia,

Was there any specific reason you chose GFS2 + clvmd over NFS?

In my experience, about a year ago, GFS2 performed very poorly compared to ext3 under high load and we had to roll back to ext3. I am pretty sure that NFS would outperform GFS2 if the VMs do heavy I/O to the disk images.

Could you give any IOPS or throughput values you achieve on the GFS2 file system with the current configuration?

Thanks,
OZ

On Fri, Aug 7, 2009 at 10:26 AM, Dustin Black <[hidden email]> wrote:
We are using iSCSI with CLVM and GFS2 very successfully with 10 physical Xen servers and 80 to 100 VMs running across them.  We use file-based disk images, all stored on a single GFS2 file system on a single iSCSI LUN accessible by all 10 Xen servers.


On Fri, Aug 7, 2009 at 10:11 AM, Jia Rao <[hidden email]> wrote:
Thank you very much for the prompt replies.

My motivation for moving to iSCSI is purely performance.
The physical hosts and the storage server are connected through a 1 Gb switch. The storage server uses a RAID-5 disk array.
My current testing with iozone inside the VMs produced similar performance results for sequential and random reads on both iSCSI and NFS.

I was told it would make a big difference once there are 10-15 VMs sharing the storage server. In my case, I have 8-10 VMs.

Does anyone have experience with a larger number of VMs on both NFS and iSCSI?

Jia.



Re: iscsi vs nfs for xen VMs

yueluck
Thanks first.
I am researching this solution, but I have no documentation.
Would you share your experience with me, or better yet, give me a deployment document?
Thanks. My MSN: yue-luck@hotmail.com.
iSCSI + CLVM solves a lot of things, like migration and snapshots.
iSCSI + CLVM + GFS2?
iSCSI + GFS2?
Please share with me, thanks.
Reply | Threaded
Open this post in threaded view
|  
Report Content as Inappropriate

Re: iscsi vs nfs for xen VMs

Rudi Ahlers-2
In reply to this post by Dustin Black-3
On Fri, Aug 7, 2009 at 4:26 PM, Dustin Black <[hidden email]> wrote:
> We are using iSCSI with CLVM and GFS2 very successfully with 10 physical Xen
> servers and 80 to 100 VMs running across them.  We use file-based disk
> images, all stored on a single GFS2 file system on a single iSCSI LUN
> accessible by all 10 Xen servers.
>
>



What do you do if your iSCSI SAN breaks?


--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: iscsi vs nfs for xen VMs

Juergen Gotteswinter
redundant switches
nic bonding

and you are fine...
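for example, active-backup bonding on the iSCSI NICs along these lines (just a sketch of a Debian-style /etc/network/interfaces with the ifenslave package; interface names and addresses are invented):

    auto bond0
    iface bond0 inet static
        address 10.0.0.21
        netmask 255.255.255.0
        bond-slaves eth2 eth3
        bond-mode active-backup
        bond-miimon 100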

On 26.01.11 07:26, Rudi Ahlers wrote:

> On Fri, Aug 7, 2009 at 4:26 PM, Dustin Black<[hidden email]>  wrote:
>> We are using iSCSI with CLVM and GFS2 very successfully with 10 physical Xen
>> servers and 80 to 100 VMs running across them.  We use file-based disk
>> images, all stored on a single GFS2 file system on a single iSCSI LUN
>> accessible by all 10 Xen servers.
>>
>>
>
>
>
> What do you do if your iSCSI SAN breaks?
>
>


Re: iscsi vs nfs for xen VMs

Rudi Ahlers-2
On Wed, Jan 26, 2011 at 10:20 AM, Juergen Gotteswinter <[hidden email]> wrote:
> redundant switches
> nic bonding
>


How, exactly, will redundant switches & NIC bonding help if the NAS
device fails, i.e. it's totally dead? Redundant switches & NIC bonding
only help you if the network fails, which is much less likely
than the NAS failing.


--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


RE: iscsi vs nfs for xen VMs

Zary Matej
It depends on the quality of the NAS/SAN device. Some of them are more reliable and robust than the rest of the infrastructure (dual controllers, RAID 6, multipathing, etc.); obviously they cost an arm and a leg. So they SHOULD not fail completely (firmware issues are another matter, though). And in any case, even if one owns enterprise-grade storage, backups (tape, another storage system, a remote site) are always a must. Yes, if the storage fails there will be downtime. You can still have local disks on the Xen hosts, so you can, for example, restore the most important Xen guests onto the local disks from backups and live without live migration until the NAS/SAN issues are solved.

Matej
________________________________________
From: [hidden email] [[hidden email]] On Behalf Of Rudi Ahlers [[hidden email]]
Sent: 26 January 2011 09:29
To: [hidden email]
Cc: Dustin Black; [hidden email]
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs

On Wed, Jan 26, 2011 at 10:20 AM, Juergen Gotteswinter <[hidden email]> wrote:
> redundant switches
> nic bonding
>


How, exactly, will redundant switches & NIC bonding help if the NAS
device fails, i.e. it's totally dead? Redundant switches & NIC bonding
only help you if the network fails, which is much less likely
than the NAS failing.


--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: iscsi vs nfs for xen VMs

Rudi Ahlers-2
On Wed, Jan 26, 2011 at 10:44 AM, Matej Zary <[hidden email]> wrote:
> It depends on the quality of the NAS/SAN device. Some of them are more reliable and robust than the rest of the infrastructure (dual controllers, RAID 6, multipathing, etc.); obviously they cost an arm and a leg. So they SHOULD not fail completely (firmware issues are another matter, though). And in any case, even if one owns enterprise-grade storage, backups (tape, another storage system, a remote site) are always a must. Yes, if the storage fails there will be downtime. You can still have local disks on the Xen hosts, so you can, for example, restore the most important Xen guests onto the local disks from backups and live without live migration until the NAS/SAN issues are solved.
>
> Matej
> ________________________________________


Well, that's the problem. We have (had; it is soon to be returned) a
so-called "enterprise SAN" with dual everything, but it failed miserably
during December and we ended up migrating everyone to a few older NAS
devices just to get the clients' websites up again (VPS hosting). So
just because a SAN has dual PSUs, dual controllers, dual NICs, dual
heads, etc. doesn't mean it can't fail.

I'm thinking of setting up 2 independent SANs, or for that matter even
NAS clusters, and then doing something like RAID 1 (mirroring) on the
client nodes across the iSCSI mounts. But I don't know if it's feasible
or worth the effort. Has anyone done something like this?
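Concretely, something along these lines on each client node is what I have in mind (just a sketch; the target names and device letters are invented, and a write-intent bitmap would be needed so a SAN outage doesn't force a full resync):

    # log in to a same-sized LUN on each of the two SANs
    iscsiadm -m node -T iqn.2011-01.san1:vmstore -p 10.0.1.10 --login
    iscsiadm -m node -T iqn.2011-01.san2:vmstore -p 10.0.2.10 --login

    # mirror the two LUNs locally, with a bitmap for faster recovery
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --bitmap=internal /dev/sdb /dev/sdc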


--
Kind Regards
Rudi Ahlers
SoftDux

Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532


Re: iscsi vs nfs for xen VMs

Juergen Gotteswinter
In reply to this post by Rudi Ahlers-2
we use storage arrays with redundant controllers, network etc.,
for example EqualLogic, EMC or HP

in really critical scenarios you can stack EqualLogic arrays, for example

no problems for years... even when a power supply went dead or a
storage controller died, everything kept working

On 26.01.11 09:29, Rudi Ahlers wrote:

> On Wed, Jan 26, 2011 at 10:20 AM, Juergen Gotteswinter<[hidden email]>  wrote:
>> redundant switches
>> nic bonding
>>
>
>
> How, exactly, will redundant switches & NIC bonding help if the NAS
> device fails, i.e. it's totally dead? Redundant switches & NIC bonding
> only help you if the network fails, which is much less likely
> than the NAS failing.
>
>

