Remove iSCSI Drive Lock From Iomega IX4 Series NAS and vSphere Without Rebooting Either

Is your Iomega NAS’s iSCSI drive locked by vSphere, and you don’t want to reboot? This procedure works with vSphere 4.1 (tested with ESXi, but it should be the same for ESX) and an Iomega StorCenter ix4-200d (firmware 2.1.38.22294). If you encounter issues with another version, please let me know.

I’ve had trouble off and on with iSCSI on my Iomega IX4 series NAS. The problem: removing the iSCSI target from vSphere does not release the hold on the NAS, which typically means rebooting both the NAS and the host to clear the lock. While anything is connected to an iSCSI drive, the IX4 will not allow it to be edited or deleted. Since I don’t like taking my host(s) or NAS down, I’ve worked out, through a little experimentation, the steps needed to remove the target so the iSCSI drive can be edited or deleted on the NAS. This also works for removing a single target drive while vSphere is still pointing at other targets on the NAS.

Let’s get to it!

Step 1 – Move the VMs that are on the target iSCSI volume to a different datastore.
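
If you’d rather script this step than click through the client, here is a minimal sketch using the pyVmomi Python library. To be clear, this is not how the walkthrough above was done (the post predates pyVmomi); it is a hypothetical modern equivalent that drives the same vSphere API the client uses, and every hostname, credential, and object name in it is a placeholder for your environment.

    from pyVim.connect import SmartConnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim
    import ssl

    # Lab-only connection: certificate validation is deliberately skipped.
    si = SmartConnect(host="vcenter.lab.local", user="administrator",
                      pwd="password",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find(vimtype, name):
        # Walk the inventory for the first object of the given type and name.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vimtype], True)
        try:
            return next(o for o in view.view if o.name == name)
        finally:
            view.DestroyView()

    vm = find(vim.VirtualMachine, "vm01")           # a VM on the iSCSI datastore
    dest = find(vim.Datastore, "local-datastore1")  # any datastore with room
    # Relocate the VM's files to the destination datastore; power the VM
    # off first if Storage vMotion isn't licensed in your environment.
    WaitForTask(vm.RelocateVM_Task(spec=vim.vm.RelocateSpec(datastore=dest)))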

Step 2 – Navigate to Home > Inventory > Hosts and Clusters and click on the Configuration tab. Select Storage under the Hardware category. Right-click the datastore you want to remove and select Delete. At this point the datastore has been removed from the host, but no data has been lost.
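
The scripted equivalent, reusing the connection and find() helper from the sketch above (the host and datastore names are again placeholders):

    # Detach the datastore from the host; the VMFS data stays on the LUN.
    host = content.searchIndex.FindByDnsName(datacenter=None,
                                             dnsName="esx1.lab.local",
                                             vmSearch=False)
    ds = find(vim.Datastore, "ix4-iscsi01")  # the datastore being removed
    host.configManager.datastoreSystem.RemoveDatastore(ds)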

Step 3 – Still under the Hardware category, click Storage Adapters. In the right pane, click Properties. Under the Dynamic Discovery tab, delete the entry if there is one; the HBA has to be rescanned after the removal, and if Dynamic Discovery still points at your SAN/NAS target, the rescan will simply redetect the storage. Then, under the Static Discovery tab, remove the entry for the storage you want gone. The target name is a long string of characters, but it ends with what you named the datastore, so make sure you remove the correct one.
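
In API terms this step is two calls on the host’s storage system. Continuing the same hypothetical pyVmomi sketch, with a made-up NAS address and IQN (on the IX4 the IQN ends with the drive name, as noted above):

    ss = host.configManager.storageSystem
    # Find the software iSCSI adapter (shows up as vmhba3x in the client).
    hba = next(a for a in ss.storageDeviceInfo.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba))
    # Remove the Dynamic Discovery (send target) entry, if one exists.
    send = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50", port=3260)
    ss.RemoveInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[send])
    # Remove the Static Discovery entry for the one target we want gone.
    static = vim.host.InternetScsiHba.StaticTarget(
        address="192.168.1.50", port=3260,
        iScsiName="iqn.1992-04.com.emc:ix4-200d:ix4-iscsi01")  # placeholder IQN
    ss.RemoveInternetScsiStaticTargets(iScsiHbaDevice=hba.device, targets=[static])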

Step 4 – After the target has been removed and you close the window, you will be prompted to rescan the HBA. Once the rescan completes, the iSCSI target no longer appears under the adapter.
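
The rescan maps to two short calls, continuing the sketch from Step 3:

    ss.RescanHba(hba.device)  # rescan just the software iSCSI adapter
    ss.RescanVmfs()           # refresh the VMFS volume list afterwards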

Step 5 – You must repeat the above steps on every host configured to use the target, since each host manages its own storage adapters. In a cluster, though, the datastore itself only needs to be deleted once; it disappears from the remaining hosts automatically after you remove it from the first. A scripted sweep across hosts is sketched below.
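
To sweep every host in one pass, the per-host cleanup can be wrapped in a loop, reusing the placeholder static target from the Step 3 sketch:

    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        for h in view.view:
            ss = h.configManager.storageSystem
            hba = next((a for a in ss.storageDeviceInfo.hostBusAdapter
                        if isinstance(a, vim.host.InternetScsiHba)), None)
            if hba is None:
                continue  # this host has no software iSCSI adapter
            try:
                ss.RemoveInternetScsiStaticTargets(iScsiHbaDevice=hba.device,
                                                   targets=[static])
            except vim.fault.NotFound:
                pass  # the entry was already absent on this host
            ss.RescanHba(hba.device)
    finally:
        view.DestroyView()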

Step 6 – Head back to the NAS management console; you should now be able to edit or delete your iSCSI drive, all without having to reboot ESX/vSphere or your NAS.

11 thoughts on “Remove iSCSI Drive Lock From Iomega IX4 Series NAS and vSphere Without Rebooting Either”

  1. Hi Clement, how are you?

    I’m thinking about using this storage in my infrastructure, especially in the secondary offices.

    I have about 5~10 users in that office, basically running a file server and a DB2 application. Do you think this storage will support my scenario? With ESXi + DB2 + this storage, do you think I’ll get acceptable performance?

    If you could help me, I’d be very grateful.

    Thanks for your attention!
    Jr.

    1. @Paul: How many VMs are you planning to run? I also assume this is for production. I think it can definitely serve 5-10 users in a typical office setting as a file server while also running a DB2 database and supporting VMs, assuming the VMs and the database are serving a SOHO-sized user base. I would also advise using iSCSI over NFS: I’ve experienced intermittent failures while having ESX access the server via NFS, where the unit’s other services drop out, like the media and file-sharing services. This does not seem to happen when I use iSCSI mount points for VMFS. I can’t explain why; it’s just an observed behavior, and I’m on the latest firmware for the unit and drives. I like the redundancy features on the unit, it being a very affordable SOHO unit with multiple NICs.

      I would also advise looking into the Synology DS411+. It doesn’t have the redundant NICs, but I’d imagine the MTBF (mean time between failures) of its components is pretty huge, and it offers significantly higher performance than the IX4 (at least from what I’ve read). There is a new line of Iomegas out now as well, but I haven’t seen any performance numbers on them; they appear to be more expensive than the Synology and QNAP units, but they support SSDs. All things to think about.

      The IX4 is a dependable workhorse, but not a race horse. The Synology is a race horse, but I have no direct experience with it, and the new Iomegas may even top that one.

      Hope this helps.
      Clement

  2. Hi,
    Thanks for your useful articles. I already have an Iomega ix4-200d and am having some problems with vSphere 4; perhaps you have already seen them and know how to sort them out.

    The device freezes on a daily basis and needs to be restarted. It is connected via a core switch to three VMware nodes and shares a LUN via iSCSI.

    I am also wondering whether your procedure for unlocking the LUN could help in this case. I also want to ask whether it destroys the LUN, and if not, how I can reconnect the LUN after the procedure.

    Thanks
    Mike

    1. @Mike. The procedure here will not destroy the LUN. You can reconnect the LUN by adding the dynamic discovery target back into ESX the same way you removed it. The only way that you would destroy the LUN is by doing it within the Iomega interface.
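
      For anyone scripting it, re-adding the dynamic discovery entry and rescanning would look roughly like this in the hypothetical pyVmomi sketch from the post (the address is a placeholder):

        # Re-add the send target and rescan; the LUN comes back untouched.
        send = vim.host.InternetScsiHba.SendTarget(address="192.168.1.50",
                                                   port=3260)
        ss.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device, targets=[send])
        ss.RescanHba(hba.device)
        ss.RescanVmfs()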

      In regards to losing connectivity daily, I have never had that problem, and I push the unit pretty hard sometimes. You can try disabling services that you don’t need. Check out my post on that: http://d3planet.com/rtfb/2011/04/20/squeezing-maximum-performance-out-of-your-iomega-ix4-200d-as-a-vsphere-storage-target/.

      You can also try connecting via NFS instead of iSCSI, unless you really want to use it as an iSCSI target. From what I’ve read it even performs a little better than iSCSI, though marginally, but I have not observed that myself.

      Also, are you using the latest firmware for your device and the drives? How hard are you pushing the device? How long have you had it?

  3. Thanks for your comments.

    I have had the unit for two months. I first tested with NFS and had the same problem: sometimes the nodes lose their connection to the store.
    For test purposes I deployed only one VM.
    I will follow the other post to disable unused services.
    But now I can’t reconnect the LUN (which is 2 TB); all the nodes are telling me the partition is new and should be reformatted.

  4. It may be an issue with the unit itself. I’ve had mine running iSCSI for multiple weeks now without a failure. Did you have to reformat? ESX should recognize the VMFS volume without reformatting unless something happened on the storage end.

    You may want to contact Iomega support and see if they have any suggestions. Let me know how it goes.
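
    One way to check this from the API, reusing the hypothetical pyVmomi session sketched in the post: VMFS volumes that ESX sees as snapshot copies show up as “unresolved” rather than as blank partitions that need a reformat.

      for v in ss.QueryUnresolvedVmfsVolume():
          # Each entry is a VMFS copy the host recognizes but hasn't mounted.
          print(v.vmfsLabel, v.vmfsUuid, v.resolveStatus.resolvable)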

  5. Finally, the problem was the services on the device.

    I opened a case with Iomega and they told me to disable:
    - Media Server (Settings > Multimedia > Media Server)
    - Indexing (Settings > Search > Searching Settings)

    Now the device runs only basic services (iSCSI and NFS). It has been working for 4 days now, and yesterday I ran some tests, moving some VMs onto it and running OS updates inside them.

    Mike

  6. Hi Clement,

    I’m curious how your continued experience with the ix4-200d has been. We purchased one three months ago for our test lab and it has frozen hard around once a month. vSphere 4.1, 2 hosts connected via iSCSI, max 5 VMs running at once – and realistically only one engineer using it at a time. All unnecessary services are disabled.

    Seems a bit unreliable to me. We’re not having much luck with Iomega support – the first two cases ended with "tell us if it happens again". We’re on the third support case now, and the engineer just asked us how many VMs we’re running on it as "it may be IO load related"! I’ve got plenty of storage experience, and no storage – even a little SOHO unit – should freeze due to IO load. And ours definitely wasn’t under load.

    I’m hoping support can fix it but I’m doubtful. I’m planning on seeing if I can return it.

    What are your thoughts on the unit at this stage?

    Thanks,
    JK.

    1. @JK. Sounds like you’ve had a rough go of it. I’ve encountered a freeze similar to what you’re mentioning when I was using NFS, and it would happen around once a month as well. When I switched back to iSCSI, the problem completely disappeared, and that was months ago.

      Are you using jumbo frames? When I first picked up the unit, quite some time ago, I had problems with jumbo frames. I can’t precisely remember what the issue was, and I don’t know whether it was related to my NICs, my switches, or the unit itself, but the problems disappeared when I disabled them, and jumbo frames weren’t buying me any perceivable performance gains anyway. It may be worth looking into if you’re using them.

      I’d follow your gut. If you’re feeling the unit is unreliable, then you should try to return it. There are other units out there around the same price point that seem to have better reviews, but I’m imagining that if you’re on your 3rd incident, you may be past the return window. I’m running 2 of the units with no problems, but it took a little trial and error to get them to ‘stable’.

      As for load, and keep in mind this is a lab and not production, I’m running 3 hosts against 2 IX4s, with 11 VMs running right now on just 1 of the units, and I’ve gone without a hiccup since March, when I had an HDD failure.

      Hope this helps. If you have any other questions or I can help with anything, let me know.

  7. Thanks for the advice Clement. Initially we were using jumbo frames, but we disabled them after the first crash. I had a look at your other posts about these units and we have tweaked that stuff as well.

    It really is a shame that these boxes are so bad. Being a subsidiary of EMC, and – from what I could tell from blogs – targeting the IT Pro lab market, I figured these would be rock solid. What a PITA.

    Thanks for your help.

    1. @JK. I agree. I’ve read a lot of harsh reviews of the IX4s on the Internet in relation to their being storage targets for VMware, everything from poor performance to instability, but I take them with a grain of salt because I’ve had pretty good experiences with the units I have running. Other units at this price point are missing some of the features that make this unit ‘special’ to me, like the dual bonded NICs. You may want to see if Iomega will just send you out a new unit to test with. I’m also considering picking up a Synology DS411+. The price point on them is higher and performance is supposed to be significantly better, but there is only a single network interface on the back of the box. Murphy’s Law, which seems to always hit in technology, says that single point of failure will indeed fail, but it may be worth a shot.

      Good luck.
