Using the Iomega IX4-200D as a Storage Target for vSphere (ESX and ESXi): Lessons Learned


I’ve been using an Iomega IX4-200D as a storage target for vSphere and have to say that for the most part it works well.  I’ve used it both as an iSCSI target and as NFS storage.

You can and should expect it to suffer typical storage performance issues.  It runs on 4 hard drives; mine are in a RAID 5 array, which is not the most performant layout but is the safest in case of a disk failure, and who wants to lose VMs?  It's still a limited set of spindles to work with, and keeping that in mind will save you trouble down the road.
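
To put "limited set of spindles" into rough numbers, here is a back-of-the-envelope sketch.  The per-disk IOPS figure and the read/write mixes are generic assumptions for 7200 RPM SATA drives, not measurements from my unit.

```python
# Back-of-the-envelope estimate for a 4-disk RAID 5 array.
# Assumptions (not measured on the IX4-200D): ~75 IOPS per 7200 RPM SATA
# disk, and a RAID 5 write penalty of 4 back-end I/Os per front-end write.
DISKS = 4
IOPS_PER_DISK = 75
RAID5_WRITE_PENALTY = 4

def effective_iops(read_fraction):
    """Front-end IOPS the array can sustain for a given read/write mix."""
    raw = DISKS * IOPS_PER_DISK
    write_fraction = 1.0 - read_fraction
    # Reads cost 1 back-end I/O each; RAID 5 writes cost 4.
    return raw / (read_fraction + write_fraction * RAID5_WRITE_PENALTY)

for mix in (1.0, 0.7, 0.5):
    print(f"{int(mix * 100)}% reads: ~{effective_iops(mix):.0f} IOPS")
# 100% reads: ~300 IOPS; 70% reads: ~158 IOPS; 50% reads: ~120 IOPS
```

Spread a dozen or more VMs across a few hundred front-end IOPS and it's easy to see why boot storms and backup jobs hurt.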

At one point, I had 17 VMs running on a single IX4-200D.

Still interested?  Keep reading…

General Info

Startup was painful in this scenario.  I could only start a few VMs at a time and have them come up in short order.  I would generally start 3 to 4 at a time with 1-2 minutes between VMs, let them load up and settle into a steady run state so IOPS drop, then start the next set.  This worked pretty well.  I did once try to start 8 or so VMs at the same time… big mistake (I ended up having to force reboots on the hosts).  Once the VMs are up and running, performance is very manageable.
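
If you'd rather script that staggering than click through the client, here is a minimal sketch using pyVmomi.  The vCenter address, credentials, VM names, batch size, and delays are all placeholder assumptions to adjust for your own lab.

```python
# Minimal sketch: power on lab VMs in small batches so the NAS isn't hit
# with a boot storm. Requires pyVmomi; the host, credentials, names, and
# delays below are placeholders, not values from this article.
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

STARTUP_ORDER = ["dc01", "dns01", "sql01", "sp01"]  # hypothetical VM names
BATCH_SIZE = 3               # VMs started per batch
DELAY_BETWEEN_VMS = 90       # seconds between power-ons within a batch
DELAY_BETWEEN_BATCHES = 300  # let a batch settle before starting the next

ctx = ssl._create_unverified_context()  # lab only; don't skip cert checks in production
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    by_name = {vm.name: vm for vm in view.view}
    vms = [by_name[n] for n in STARTUP_ORDER if n in by_name]

    for i in range(0, len(vms), BATCH_SIZE):
        for vm in vms[i:i + BATCH_SIZE]:
            if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
                vm.PowerOnVM_Task()  # fire and forget; the sleeps do the pacing
                time.sleep(DELAY_BETWEEN_VMS)
        time.sleep(DELAY_BETWEEN_BATCHES)
finally:
    Disconnect(si)
```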

I ran all the typical Microsoft infrastructure products plus some: Active Directory, DNS, DHCP, SQL, SharePoint, software routers and firewalls, TMG, Exchange, and so on…  Of course, this was a lab environment and not production; otherwise SQL Server, as disk-intensive as it is, would have been a real problem, but it ran just fine for lab operations.


vDR

I implemented vDR, which took multiple weeks to get operating consistently.  The problem with it was also IOPS-related.  When I was backing up multiple VMs, again on the same set of spindles, vDR would eventually fail and the hosts would often lose their connection to the NAS.  When I used server local storage as the vDR target, things went much better.  Since acquiring a 2nd IX4-200D, I've split off operations and things are far more stable.

Configuration

With 2 IX4-200Ds running, as I mentioned in the previous paragraph, things are far more stable.  I've split the VM workload across each NAS and am using vDR to back up from one to the other.  Example: SQL is running on NAS1 and being backed up to NAS2, while SharePoint is running on NAS2 and being backed up to NAS1.  I set a backup window for each NAS that gives plenty of time for the backups to complete, with at least an hour between backup windows to allow cached writes to complete before putting load back on.
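
For what it's worth, here is a tiny sketch of the scheduling rule I follow: the second window must start at least an hour after the first one ends.  The dates and times shown are illustrative, not my actual windows.

```python
# Sanity check: the second backup window must start at least an hour after
# the first one ends, so cached writes can flush before load returns.
# Dates/times are illustrative placeholders.
from datetime import datetime, timedelta

GAP = timedelta(hours=1)

nas1_end   = datetime(2011, 1, 1, 23, 0)   # NAS1 VMs backed up to NAS2
nas2_start = datetime(2011, 1, 2, 0, 30)   # NAS2 VMs backed up to NAS1

if nas1_end + GAP <= nas2_start:
    print("Backup windows are spaced correctly.")
else:
    print("Windows are too close together; widen the gap.")
```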

Pricing

One of the best things about the Iomega NAS, besides the fact that it makes a decent ESX target and is VMware certified, is that you can find them inexpensively on eBay.  I picked up the 8TB version last summer for about $1000 new, significant savings over retail, and I picked up the 4TB version for just over $300 manufacturer refurbished a few weeks ago.  You may have to hunt around for a good price, but it’s definitely worth it.

Troubleshooting Tip

If you’re running DNS as a virtual machine and need to bring your environment online from a down state, you will very likely need to remove the DNS entries from the Iomega.  You can add them back later if you want, but apparently there is a feature (glitch) that prevents ESX from mounting the storage if the NAS cannot resolve the DNS names of the ESX hosts.  Keep this in mind when doing any kind of cold start.  The issue is on the Iomega end, not the ESX end.
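
A quick pre-flight check along these lines can save a cold-start headache.  Note that the sketch below tests name resolution from whatever machine runs it, not from the NAS itself, and the host names are hypothetical.

```python
# Cold-start pre-flight: if the ESX host names don't resolve yet (because
# DNS itself is a VM that's still down), clear the DNS entries on the
# Iomega before trying to mount the datastores. Host names are hypothetical,
# and this checks resolution from the machine running the script, not the NAS.
import socket

ESX_HOSTS = ["esx01.lab.local", "esx02.lab.local"]

for name in ESX_HOSTS:
    try:
        addr = socket.gethostbyname(name)
        print(f"{name} -> {addr}: DNS is answering; the NAS DNS entries can stay.")
    except socket.gaierror:
        print(f"{name} does not resolve; remove the DNS entries from the "
              "Iomega before mounting storage.")
```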

I hope this helps someone out.  Let me know if there are questions about other aspects of the Iomega.

Thanks for reading.



3 thoughts on “Using the Iomega IX4-200D as a Storage Target for vSphere (ESX and ESXi): Lessons Learned”

  1. I’m having a problem with device latency…
    With 10 VMs, latency gets close to 1000 ms,
    and VMware warns about a non-VI workload on this storage…
    Do you have the same problem?

    1. @Gustavo, I have not had that problem. Are you running VMs that are excessively demanding? Are you running the latest firmware on the NAS? Is there excessive traffic on the network? What other services are running on the NAS? Those are the things that I would look at initially. If your switch is experiencing a lot of traffic, can you segment your network or dedicate a switch to that traffic? Knowing what your architecture is can help. Let me know.

      Clement

  2. Clement,

    I use Windows 2008 R2 with its “now free” iSCSI target software to get my vMotion fix… 🙂

    Setup can be a little bit painful, but once you figure it out it works like a charm. Saves the expense of buying physical iSCSI/NAS, and it allows you to add massive storage easily.

    Basically it takes your drive and mounts a VHD disk. The VHD acts as your iSCSI LUN. 2008 R2 locks the VHD file for remote usage, so your performance is pretty decent. I can easily run multiple VMs on a single VHD file, and I can also extend or contract the disk, if desired.

    For my VMs, I back them up using vDR, so if the disk goes PIFF, I simply do a restore. They’re all non-critical virtuals, but it’s nice if and when something goes wrong that vDR comes to the rescue!

    It also has the capability of making a snapshot as well, so you can go bonkers if your data needs multiple backups.

    Good article, if you’re interested.

    http://www.techrepublic.com/blog/networking/microsoft-iscsi-software-target-33-for-windows-server-2008-r2-iscsi-san-for-the-masses/3936
