I’ve been using a few Iomega StorCenter IX4-200Ds as storage targets for vSphere 4 ESXi and have been a little disappointed with the performance. The disappointment really only surfaces when VMs are booting, when I’m using Storage VMotion, or when consolidating snapshots. For the price point, it’s still a great piece of equipment, and I can honestly say that I’ve had 17+ active VMs running off the unit with no problems once the VMs are up and running. Keep in mind this is a lab implementation and those VMs weren’t under heavy utilization.
I’ll keep this post updated with anything else I come up with or that is submitted in comments.
Edited 10/23/2011 – No longer recommending use of NFS as a vSphere target on this unit.
Since I want to extend the life of these NAS units, at least for the time being, I started looking at ways to get as much performance out of them as possible. Here are a few things I’ve done to squeeze out all I could:
- Bond the NICs. The Iomega IX4-200D has two network connections available. If you bond them and set the bond to ‘Load Balance’, you may notice some performance improvement. Depending on your network, the bottleneck may not be at the NICs, but it won’t hurt to make sure you have as much network throughput available as possible.
- Use iSCSI vs. NFS. After using NFS with this device for over a year, earlier this year I switched back to iSCSI. As noted by a commenter below, and matching my own experience, vSphere hosts will eventually lose their connection to the NAS. In addition to the hosts being unable to re-scan and pick the NAS back up, all other access to the NAS is blocked as well, with the exception of the HTTP management interface. The problem is sporadic; I cannot forcibly duplicate it, but it seems to be only a matter of time before the connection drops, causing the VMs to fail and the NAS to require a reboot. This has happened at various points through both low and high utilization periods, and on multiple units (two, to be exact, both running the latest firmware/software for the units and the drives). Iomega support was unable to shed any light on the issue since it is sporadic and I’m unable to reproduce it on call. Moral of the story from my perspective: use iSCSI. I have not encountered this problem a single time in the five months since switching to iSCSI.
- Use NFS vs. iSCSI. (This was my original recommendation; per the 10/23/2011 edit above, I no longer recommend NFS on this unit.) Whether it’s how iSCSI is implemented on the NAS or some other reason, I noticed performance improvements using NFS instead of iSCSI. NFS also makes things more convenient when a restart is needed or changes need to be made, since active iSCSI connections prevent alteration of the iSCSI target, which I’ve found to be a pain. See: http://d3planet.com/rtfb/2011/01/04/remove-iscsi-drive-lock-from-iomega-ix4-series-nas-and-vsphere-without-rebooting-either/.
- Turn off services that are not needed. The Iomega IX4-200D ships with a multitude of services that consume resources on the unit whether or not they are in use. By turning off the services you aren’t using, you free up processor, RAM, and I/O for storage duties. The services I disabled are Media Services, Picture Services, Search, and Torrent, with the Search service being the most impactful. (This information came straight from Iomega.)
- Jumbo Frames. I’ve had mixed results with attempts to use jumbo frames: I had disconnects between the hosts and the NAS, and at the time I did not have a solid baseline to compare against. Theoretically, though, jumbo frames should increase performance, and with some of the hardware changes I’ve made (better NICs and switches), it may be time to give them another try.
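On the NIC-bonding item above: with a ‘Load Balance’ style bond, a single TCP stream typically rides one physical NIC, so gains tend to show up only with multiple concurrent streams. Here’s a minimal, hedged sketch for comparing one stream vs. two parallel streams; it runs against a local sink server for illustration, and you would point it at a listener on the NAS side (the port number here is arbitrary).

```python
import socket
import threading
import time

def run_sink(port, stop):
    """Minimal TCP sink: accepts connections and discards incoming data."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(8)
    srv.settimeout(0.5)

    def drain(conn):
        with conn:
            while conn.recv(65536):
                pass

    while not stop.is_set():
        try:
            conn, _ = srv.accept()
        except socket.timeout:
            continue
        threading.Thread(target=drain, args=(conn,), daemon=True).start()
    srv.close()

def aggregate_throughput(host, port, streams, mb_per_stream=16):
    """Send data over `streams` parallel connections; return aggregate MB/s."""
    payload = b"\0" * 65536
    chunks = mb_per_stream * 1024 * 1024 // len(payload)

    def send():
        s = socket.create_connection((host, port))
        for _ in range(chunks):
            s.sendall(payload)
        s.close()

    threads = [threading.Thread(target=send) for _ in range(streams)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return streams * mb_per_stream / (time.time() - start)

if __name__ == "__main__":
    # Demo against a local sink; replace host/port with a listener near the NAS.
    stop = threading.Event()
    threading.Thread(target=run_sink, args=(9009, stop), daemon=True).start()
    time.sleep(0.2)
    one = aggregate_throughput("127.0.0.1", 9009, streams=1)
    two = aggregate_throughput("127.0.0.1", 9009, streams=2)
    print(f"1 stream: {one:.0f} MB/s, 2 streams: {two:.0f} MB/s")
    stop.set()
```

If two streams roughly double the single-stream number across the bond, the ‘Load Balance’ setting is doing its job; if not, the hash policy may be pinning both streams to one NIC.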
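On the NFS connection drops described above: since the failures gave no warning, a small watchdog that polls the NAS can at least timestamp when the unit goes unresponsive. This is a hedged sketch, not anything Iomega supplies; the NAS address is a placeholder, and the ports assumed are 80 (the HTTP management interface) and 3260 (the standard iSCSI target port).

```python
import datetime
import socket
import time

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def watch_nas(host, ports=(80, 3260), interval=60, cycles=None):
    """Poll the given ports, printing a timestamped line whenever one is down.

    cycles=None polls forever; pass a number to stop after that many rounds.
    """
    n = 0
    while cycles is None or n < cycles:
        for port in ports:
            if not port_open(host, port):
                stamp = datetime.datetime.now().isoformat(timespec="seconds")
                print(f"{stamp} {host}:{port} unreachable")
        n += 1
        if cycles is None or n < cycles:
            time.sleep(interval)

# Example (hypothetical NAS address):
# watch_nas("192.168.1.50", ports=(80, 3260), interval=60)
```

Running this from a box on the same segment as the hosts gives you a log of exactly when the NAS dropped, which is the kind of evidence that’s hard to hand to support when the failure won’t reproduce on call.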
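Before retrying jumbo frames, it’s worth verifying that large frames actually pass end to end with a don’t-fragment ping sized to the MTU. The largest ICMP payload that fits in one frame is the MTU minus the 20-byte IP header and 8-byte ICMP header, so 8972 bytes for an MTU of 9000. A quick sketch of the arithmetic, with the ping invocations as comments (exact ping flags vary by implementation):

```python
def icmp_payload_for_mtu(mtu, ip_header=20, icmp_header=8):
    """Largest ICMP payload that fits in a single frame of the given MTU."""
    return mtu - ip_header - icmp_header

print(icmp_payload_for_mtu(9000))  # 8972 for a 9000-byte jumbo MTU
print(icmp_payload_for_mtu(1500))  # 1472 for a standard MTU

# From a Linux box (iputils ping; -M do sets don't-fragment):
#   ping -M do -s 8972 <nas-ip>
# From an ESXi host, using VMware's vmkping utility:
#   vmkping -d -s 8972 <nas-ip>
```

If the 8972-byte ping fails while a 1472-byte one succeeds, some hop (vSwitch, physical switch, or the NAS itself) isn’t configured for jumbo frames, which would explain the disconnects.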
I haven’t compiled hard numbers for these changes since they occurred over time, so I don’t have a single point in time for comparison. If/when I get a chance, I will revisit this post with some hard and fast numbers; or, if a reader has the opportunity to test, please let me know and I will post your numbers along with credit to the tester.
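For anyone who does want to gather numbers, here is a minimal sequential-write benchmark sketch you could run against a path where the NAS share or datastore is mounted. The mount point shown is a placeholder, and `os.fsync` is called so the OS page cache doesn’t flatter the result; treat it as a rough consistency check, not a rigorous benchmark.

```python
import os
import time

def sequential_write_mbps(path, total_mb=128, block_kb=1024):
    """Write total_mb of zeros to `path` in block_kb chunks and return MB/s.

    fsync forces the data out of the page cache before timing stops,
    and the test file is removed afterwards.
    """
    block = b"\0" * (block_kb * 1024)
    blocks = total_mb * 1024 // block_kb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(path)
    return total_mb / elapsed

# Example (hypothetical mount point for the NAS share):
# print(f"{sequential_write_mbps('/mnt/ix4/benchfile'):.1f} MB/s")
```

Running the same call before and after each tweak (bonding, iSCSI vs. NFS, services off) would give the before/after comparison this post is missing.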
I’m well aware that there are numerous other NAS units on the market that will serve as more performant and efficient vSphere targets, so I ask that you keep in mind the context of this post before commenting on how another unit is so much better. This post is for people who are currently using the IX4-200D or can get their hands on one for a very good price (i.e., an offer they just can’t pass up).
I hope this helps someone out and increases the usefulness / life of your investment.