Squeezing Maximum Performance Out Of Your Iomega IX4-200D as a vSphere Storage Target

I’ve been using a few Iomega StorCenter IX4-200Ds as storage targets for vSphere 4 ESXi and have been a little disappointed in the performance.  The disappointment really only shows up when VMs are booting, during Storage vMotion, or when consolidating snapshots.  For the price point, it’s still a great piece of equipment, and I can honestly say that I’ve had 17+ active VMs running off the unit with no problem once the VMs are up and running.  Keep in mind this is a lab implementation and those VMs weren’t under heavy utilization.

I’ll keep this post updated with anything else I come up with or that is submitted in comments.

Edited 10/23/2011 – No longer recommending use of NFS as a vSphere target on this unit.

Since I want to extend the life of these NAS units, at least for the time being, I started looking at ways to get as much performance out of them as possible.  Here are a few things I’ve done to squeeze out all I could:


  • Bond the NICs.  The Iomega IX4-200D has two network connections available.  If you bond them and set them to ‘Load Balance’, you may notice some performance improvement.  Depending on your network, the bottleneck may not be at the NICs, but it doesn’t hurt to make sure you have as much network throughput as possible (there’s a host-side teaming sketch after this list).
  • Use iSCSI vs. NFS.  After using NFS for over a year with this device, earlier this year I switched back to iSCSI (there’s a quick command-line sketch after this list).  As noted by a commenter below, and matching my own experience, vSphere hosts will eventually lose their NFS connection to the NAS.  In addition to not being able to re-scan and pick the NAS back up, all other access to the NAS is blocked, with the exception of the HTTP management interface.  The problem is sporadic, and I cannot forcibly duplicate it, but it seems to be just a matter of time before the connection drops, causing the VMs to fail and the NAS to require a reboot.  This has happened at various points through both low and high utilization periods, and on multiple units (two, to be exact, both running the latest firmware/software for the units and the drives).  Iomega support was unable to shed any light on the issue since it is sporadic and I’m unable to reproduce it on call.  Moral of the story, from my perspective: use iSCSI.  I have not encountered this problem a single time since switching to iSCSI, and it has now been five months.
  • Use NFS vs. iSCSI.  (This was my original recommendation; per the 10/23/2011 note at the top, I no longer recommend NFS on this unit, and the point above supersedes it.)  Whether it’s down to how iSCSI is implemented on the NAS or some other reason, I had noticed performance improvements using NFS instead of iSCSI.  NFS also makes it more convenient when a restart is needed or changes need to be made, since active iSCSI connections prevent altering the iSCSI target, which I’ve found to be a pain.  See: http://d3planet.com/rtfb/2011/01/04/remove-iscsi-drive-lock-from-iomega-ix4-series-nas-and-vsphere-without-rebooting-either/.
  • Turn off services that are not needed.  The Iomega IX4-200D comes with a multitude of services that consume resources on the unit whether or not they are in use.  Turning off the services you aren’t using frees up processor, RAM, and I/O for storage traffic.  The services I disabled are Media Services, Picture Services, Search, and Torrent, with Search being the most impactful (this information came straight from Iomega).
  • Jumbo Frames.  I’ve had mixed results with jumbo frames.  I had disconnects from the hosts to the NAS and, at the time, did not have a solid baseline to compare against.  Theoretically, jumbo frames should increase performance, though, and with some of the hardware changes I’ve made (better NICs and switches) it may be time to give them another try (see the MTU sketch after this list).
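
Host-side teaming, to go with the bonding bullet above: the bonding itself is configured in the Iomega web UI, but if you want to double-check the host side of the same path, here is a minimal sketch of reviewing and setting the teaming policy on a storage vSwitch.  This uses the newer 5.x-style esxcli syntax (vSphere 4 exposes the same settings through the vSphere Client or the vicfg-* tools), and the vSwitch and vmnic names are placeholders for whatever your environment uses.

    # List physical NICs and the current teaming policy on the storage vSwitch
    esxcli network nic list
    esxcli network vswitch standard policy failover get -v vSwitch1

    # Make both uplinks active and load balance on IP hash
    # (IP hash requires a static EtherChannel on the physical switch ports)
    esxcli network vswitch standard policy failover set -v vSwitch1 -a vmnic0,vmnic1 -l iphash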
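
For the iSCSI recommendation above, this is roughly what pointing a host at the IX4’s target looks like from the command line.  Again, this is the newer 5.x-style esxcli syntax (on vSphere 4 the same steps live in the vSphere Client or under esxcli swiscsi / vicfg-iscsi), and the adapter name and portal IP below are placeholders.

    # Enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true

    # Point dynamic discovery at the IX4-200D's iSCSI portal
    # (find your adapter name with 'esxcli iscsi adapter list')
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260

    # Rescan so the new LUN shows up as a datastore candidate
    esxcli storage core adapter rescan --adapter=vmhba33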
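
If I do give jumbo frames another try, the sanity check will look something like the following (same 5.x-style syntax caveat; the vSwitch, VMkernel port, and NAS IP are placeholders, and the NAS plus every physical switch in the path has to allow 9000-byte frames end to end).

    # Raise the MTU on the storage vSwitch and its VMkernel port
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000

    # Confirm an unfragmented jumbo path to the NAS
    # (8972 = 9000 minus the 20-byte IP header and 8-byte ICMP header)
    vmkping -d -s 8972 192.168.1.50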

I haven’t compiled hard numbers for these changes since they occurred over time, so I don’t have a single point in time to compare against.  If/when I get a chance, I will revisit this post with some hard and fast numbers; or, if a reader has the opportunity to test, please let me know and I will post your numbers along with credit to the tester.

I’m well aware that there are numerous other NAS units on the market that will serve as more performant and efficient vSphere targets, so please keep in mind the context of this post before commenting on how another unit is so much better.  This post is for people who are currently using the IX4-200D or can get their hands on one for a very good price (i.e., an offer they just can’t pass up).

I hope this helps someone out and extends the usefulness and life of your investment.


7 thoughts on “Squeezing Maximum Performance Out Of Your Iomega IX4-200D as a vSphere Storage Target”

  1. I too have been using this unit as an iSCSI target for vSphere 4.1 as well as using NFS for vdisks (it’s a vanilla server, but well powered thanks to an Intel i7 920, 18GB of DDR3 1333 RAM, and an Intel SRCSAS18E SAS/SATA RAID controller with 256MB BBU WB cache), and I can’t say I’ve done any reliable performance testing, but every 2 weeks to 2 months my host loses connection long enough to cause the guests that rely on NFS to fail.  Following that, the array is marked dirty and must rebuild.  So far I’m disappointed at what “Support” has had to say, and recently I got a notification saying that the system 3.3v rail was only something like 1.6v, so I’m going to try to get the unit swapped in hopes that I’ve just got bad hardware.

    How are you testing performance? I’d love to compare notes since I’ve got some decent network equipment (Intel PCIe 1GbE NICs, managed switch, CAT6, 9k frames, etc.).

  2. @Rainabba: Since this post, I’ve actually gone full-time into iSCSI. I experienced a problem similar to what you experienced with an intermittent connection failure while using NFS. I have not been able to determine what causes the problem, whether it be utilization or what, and I can’t forcibly repeat the failure. It just appears to be random. This happened on two different arrays, so I would guess that something the hosts are doing causes the NFS failure. It has not occurred since I switched back to iSCSI.

  3. I’ve updated the post with a problem that I’ve experienced using NFS on this unit that Rainabba also experienced. Hope this helps.

    @Rainabba: Thanks for commenting on this article with your experience. I had forgotten to update this when I switched everything over to iSCSI and didn’t have the issue any longer.

  4. Thanks for your research. A bit late, but I have no idea where to find the search service on the Iomega. Can you please explain?

  5. What firmware are you using here? I’m a new owner of an IX4-200D and I don’t feel like these steps maximize performance anymore…
