I recently found a feature that allows you to quickly migrate the management network from a Distributed Virtual Switch (dvSwitch) to a Standard vSwitch. It’s really simple, actually: log on to the host’s console directly, either physically or via iLO (not through the vSphere Client), and select “Restore Standard Switch”.
I recently set out to recreate my vCenter installation, since I was still running on Windows Server 2003 R2 64-bit and wanted to set it up on Windows Server 2008 R2. The complication was that my Management Network was attached to a dvSwitch (Distributed Virtual Switch). I’ll briefly outline the process I used to remove each host (3 hosts in total) from vCenter and attach it to the brand-new vCenter installation with only about 10 minutes of total virtual machine (VM) downtime. This can actually be done with no downtime if you plan properly and are aware of the possible hiccups.
The new environment is now up and running, and after refining the process (poking around a lot), it only takes about 10 minutes to move each host. This was done on vSphere 5 (moving from vSphere 5 to vSphere 5 Update 1).
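As a rough PowerCLI sketch of the per-host move: the host, vCenter server, datacenter, and password values below are placeholders, not the actual names from my environment, and this assumes the host’s VMs keep running while the host is out of vCenter inventory.

```powershell
# Against the OLD vCenter: disconnect the host, then remove it from inventory.
# The VMs on the host keep running; only vCenter management is interrupted.
Connect-VIServer old-vcenter.lab.local
Set-VMHost -VMHost (Get-VMHost esx01.lab.local) -State Disconnected
Remove-VMHost -VMHost (Get-VMHost esx01.lab.local) -Confirm:$false
Disconnect-VIServer -Confirm:$false

# Against the NEW vCenter: add the host back into a datacenter.
# -Force accepts the host's untrusted certificate.
Connect-VIServer new-vcenter.lab.local
Add-VMHost -Name esx01.lab.local -Location (Get-Datacenter Lab) -User root -Password 'xxxx' -Force
```

The only real risk window is the management network itself if it lives on a dvSwitch, which is why planning that part matters.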
I’ve been running a vSphere lab of my own since ESX 2.x. Over the years, I’ve used both local and NAS-based storage with varying degrees of satisfaction. In the case of NAS storage, which is required since I can’t afford a SAN, I’ve been looking at Synology devices over the last year, trying to work up the motivation to make the investment.
Needless to say, I dove in with both feet and maxed out a Synology DS1511+ with 3TB drives. I purchased my DS1511+ from SimplyNas with the drives included, along with their burn-in testing, and I haven’t looked back. The device has been up and running since October 2011.
VMware is distributing a limited-usage vCloud Director virtual appliance to facilitate and support evaluation of the product. I wanted to stand it up in my lab as a test bed and to get to know the product better, but after checking into it, it’s not just the eval licenses that will expire. The HTTP certificates will also expire within 60 days of being generated, since they are created with the Java ‘keytool’ utility and the appliance is configured that way. As a VMware partner, I have access to licenses to extend the life of the appliance, but due to my environment, I cannot work with expired certificates.
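If you want to see exactly when the appliance’s certificates run out, keytool can print each certificate’s validity window. A minimal sketch from a PowerShell prompt; the keystore path and password here are assumptions for illustration, not the appliance’s documented values:

```powershell
# Print each certificate's "Valid from ... until ..." line.
# Keystore path and password are assumptions -- substitute your own.
keytool -list -v -keystore /opt/vmware/certificates.ks -storepass changeit |
    Select-String -Pattern 'Valid'
```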
I’ve been using a few Iomega StorCenter IX4-200D’s as storage targets for vSphere 4 ESXi and have been a little disappointed with the performance. The disappointment really only shows up when VMs are booting, during Storage vMotion, or when consolidating snapshots. For the price point, it’s still a great piece of equipment, and I can honestly say that I’ve had 17+ active VMs running off the unit with no problem once the VMs are up and running. Keep in mind this is a lab implementation, and those VMs weren’t under heavy utilization.
I’ll keep this post updated with anything else I come up with or that is submitted in comments.
Edited 10/23/2011 – No longer recommending use of NFS as a vSphere target on this unit.
I’ve been using an Iomega IX4-200D as a storage target for vSphere and have to say that for the most part it works well. I’ve used it both as an iSCSI target and as NFS storage.
You can and should expect it to suffer typical storage performance issues. It runs on four hard drives; mine are in a RAID 5 array, which is not the most performant layout but the best in case of disk failure, and who wants to lose VMs? It’s still a limited set of spindles to work with, and keeping that in mind will save you trouble down the road.
At one point, I had 17 VMs running on a single IX4-200D.
Still interested? Keep reading…
Is your Iomega NAS iSCSI drive locked by vSphere and you don’t want to reboot? This works with vSphere 4.1 (using ESXi, but it should be the same for ESX versions) and an Iomega StorCenter ix4-200d (firmware 184.108.40.20694). If you encounter issues with another version, please let me know.
I’ve had some trouble off and on with iSCSI on my Iomega IX4 series NAS. The trouble arises when the iSCSI target is removed from vSphere but the hold on the NAS is not released, typically requiring a reboot of both the NAS and the host to clear the lock. Since I don’t like taking my host(s) or NAS down, through a little experimentation I’ve come up with the steps necessary to remove the target, allowing the iSCSI drive on the NAS to be edited or deleted. While anything is connected to it, the IX4 will not allow editing or deletion. This also works for removing a single target drive while vSphere is still pointed at other targets on the NAS.
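The same idea can be sketched in PowerCLI: remove the host’s pointer to the target, then rescan so the host drops its session and the NAS releases the lock. The host name and target address below are placeholders, this assumes any datastore on the target has already been unmounted, and it is a sketch of the approach rather than the exact steps:

```powershell
# Find the software iSCSI adapter on the host (host name is a placeholder)
$esx = Get-VMHost esx01.lab.local
$hba = Get-VMHostHba -VMHost $esx -Type IScsi

# Remove the target entries pointing at the NAS (address is a placeholder)
Get-IScsiHbaTarget -IScsiHba $hba | Where-Object { $_.Address -eq '192.168.1.50' } |
    Remove-IScsiHbaTarget -Confirm:$false

# Rescan so the host tears down its sessions and the NAS releases the lock
Get-VMHostStorage -VMHost $esx -RescanAllHba
```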
Let’s get to it!
I decided to sign up for Microsoft Connect, download the Windows Home Server “Vail” public preview, and install it in my vSphere lab. I recently picked up an Iomega ix4-200 and had some extra space, so I wanted to try out the streaming media and backup functionality.
After Windows (Server 2008 R2) installed and the WHS configuration wizard started running, it would consistently error out at 36% or 37%. It would post an error and instruct me to reboot and contact the vendor if the error continued. Hmm… contacting Microsoft about a beta, yeah right. I’m impatient, so I didn’t feel like posting on the forums and waiting through what would likely be a dance of posting log files, etc. So I went into reinstall/reboot hell.
After numerous reboots, rebuilds, and a successful VMware Workstation deployment (yes, I actually wondered if Microsoft put something in the bits to keep it from being installed on VMware… hahah), I tracked down the issue. The installation guide says to use a hard drive with a minimum of 160GB of space. I had made a drive of exactly 160GB, and that was the problem. The successful Workstation VM I created had a 165GB hard drive. I went back, increased the vSphere VM’s disk to 165GB, and voila! Success. Hopefully this saves someone some time and trouble with virtualizing WHS “Vail”.
This applies to virtual switches that have already been created.
I was trying to do this earlier this evening and found a few articles describing various methods for enabling jumbo frame support on a vSwitch. After reading some of the ‘hacks’ being used, I decided to dig into PowerCLI. Amazingly enough, the solution is so simple that maybe it’ll encourage some of the people working with vSphere to move further into PowerCLI. Here’s the 30-second-or-less solution to the issue. As I wrote above, this applies to a vSwitch that’s already been created, but you can also create a vSwitch with all the specifications you need from PowerCLI using the New-VirtualSwitch cmdlet.
> $vs = Get-VirtualSwitch -Name vSwitchX
> Set-VirtualSwitch -VirtualSwitch $vs -Mtu 9000
> Get-VirtualSwitch -Name vSwitchX
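If you’re building the vSwitch from scratch instead, the same Mtu parameter is available at creation time via New-VirtualSwitch. A quick sketch, where the host, switch, and NIC names are placeholders:

```powershell
# Create a new vSwitch with jumbo frames enabled from the start
# (host, switch, and NIC names are placeholders)
$esx = Get-VMHost esx01.lab.local
New-VirtualSwitch -VMHost $esx -Name vSwitch2 -Nic vmnic2 -Mtu 9000
```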
If you’re not familiar with PowerShell, get familiar with it. 🙂 It’s an excellent, extensible platform, and many IT products are moving toward a PowerShell interface because of its ease of use.