I’ve been using an Iomega IX4-200D as a storage target for vSphere and have to say that for the most part it works well. I’ve used it both as an iSCSI target and as NFS storage.
You can and should expect typical storage performance limitations. It runs on four hard drives; mine are in a RAID 5 array, which is not the most performant configuration, but it's the best in case of a disk failure, and who wants to lose VMs? It's still a limited set of spindles to work with, and keeping that in mind will save you trouble down the road.
At one point, I had 17 VMs running on a single IX4-200D.
Still interested? Keep reading…
Startup was painful in this scenario. I could only start a few VMs at a time and have them come up in short order. I would generally start 3 to 4 VMs, with 1-2 minutes between them, let them load up and settle into a run state so IOPS were lower, then start the next set. This worked pretty well. I did make the mistake once of trying to start up 8 or so VMs at the same time... big mistake (I ended up having to force reboots on the hosts). Once the VMs are up and running, performance is very manageable.
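If you want to automate that staggered startup rather than babysit it, here's a minimal sketch of the batching logic. The `start_vm` callable is a placeholder for whatever your environment uses to power on a VM (PowerCLI, the vSphere API, etc.); the batch size and delay mirror the 3-to-4 VMs and 1-2 minute gaps described above, but tune them to your own spindles.

```python
import time

def start_in_batches(vm_names, start_vm, batch_size=3, delay_s=90):
    """Power on VMs in small batches, pausing between batches so the
    NAS's limited spindles aren't hammered by simultaneous boot I/O.

    start_vm is a hypothetical callable that powers on one VM by name.
    Returns the list of batches, in the order they were started.
    """
    batches = [vm_names[i:i + batch_size]
               for i in range(0, len(vm_names), batch_size)]
    for idx, batch in enumerate(batches):
        for name in batch:
            start_vm(name)
        # Let this batch settle before kicking off the next one.
        if idx < len(batches) - 1:
            time.sleep(delay_s)
    return batches
```

Nothing Iomega-specific here; the point is simply never to fire off eight power-ons at once against one RAID 5 set.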
I ran all the typical Microsoft infrastructure products plus some: Active Directory, DNS, DHCP, SQL, SharePoint, software routers and firewalls, TMG, Exchange, and so on... Of course, this was a lab environment, not production; otherwise SQL Server, as disk-operation intensive as it is, would have been a real problem, but it ran just fine for lab operations.
I implemented vDR, which took multiple weeks to get operational consistently. The problem with it was also IOPS-related. When I was backing up multiple VMs, again on the same set of spindles, vDR would eventually fail, and often the hosts would lose their connection to the NAS. When I used server-local storage as the vDR target, things went much better. Since acquiring a second IX4-200D, I've split off operations and things are far more stable.
With two IX4-200Ds running, I've split the VM workload across each NAS and am using vDR to back up from one to the other. Example: SQL runs on NAS1 and is backed up to NAS2, while SharePoint runs on NAS2 and is backed up to NAS1. I set a backup window for each NAS that gives plenty of time for the backups to complete, as well as at least an hour between backup windows to allow cached writes to complete before putting load back on.
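As a sanity check when planning those windows, it helps to lay them out explicitly. This is a small sketch of that arithmetic, not a vDR or Iomega setting; the start time and durations are illustrative assumptions, and the one-hour gap is the write-cache settle time mentioned above.

```python
from datetime import datetime, timedelta

def schedule_windows(first_start, duration, gap=timedelta(hours=1), count=2):
    """Lay out sequential backup windows with at least `gap` between them,
    so cached writes from one window can flush before the next begins.

    Returns a list of (start, end) tuples, one per window.
    """
    windows = []
    start = first_start
    for _ in range(count):
        end = start + duration
        windows.append((start, end))
        start = end + gap  # next window can't begin until the gap elapses
    return windows

# e.g. NAS1's window at 01:00 for 2 hours, then NAS2's no earlier than 04:00
for start, end in schedule_windows(datetime(2011, 1, 1, 1, 0), timedelta(hours=2)):
    print(start.strftime("%H:%M"), "-", end.strftime("%H:%M"))
```

If a backup regularly overruns its window, widen the duration rather than shrinking the gap; the gap is what keeps the two NAS units from loading each other.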
One of the best things about the Iomega NAS, besides the fact that it makes a decent ESX target and is VMware certified, is that you can find them inexpensively on eBay. I picked up the 8TB version last summer for about $1000 new, a significant savings over retail, and I picked up the 4TB version for just over $300, manufacturer refurbished, a few weeks ago. You may have to hunt around for a good price, but it's definitely worth it.
If you're running DNS as a virtual machine and need to bring your environment online from a down state, you will very likely need to remove the DNS server entries from the Iomega. You can add them back later if you want, but apparently there is a feature (glitch) that prevents ESX from mounting the storage if the storage cannot resolve the DNS names of the ESX hosts. Keep this in mind when doing any kind of cold start. The issue is on the Iomega end, not the ESX end.
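A quick way to catch this before it bites you during a cold start is to check, from wherever your DNS actually lives at that moment, whether the ESX host names resolve at all. This is a generic sketch using the standard resolver, not an Iomega tool; the host names are placeholders.

```python
import socket

def unresolvable_hosts(hostnames):
    """Return the host names that fail DNS resolution.

    During a cold start with DNS still down inside a VM, a non-empty
    result is a hint that the NAS won't be able to resolve your ESX
    hosts either, and may refuse to serve the datastore until you
    remove its DNS entries (or add static host entries).
    """
    failed = []
    for name in hostnames:
        try:
            socket.gethostbyname(name)
        except socket.gaierror:
            failed.append(name)
    return failed
```

Run it against your ESX host names before powering anything on; if anything comes back in the list, fix resolution (or pull the DNS entries from the Iomega) first.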
I hope this helps someone out. Let me know if there are questions about other aspects of the Iomega.
Thanks for reading.