Move a Linked Clone View Pool to a Different Cluster with No Downtime

Happy new year all! I figured I would start off the year with a procedure I’ve used successfully to migrate View pools from one cluster to another, with completely disparate storage. When you start searching on this, one official answer from VMware is a rebalance:

Migrating Linked clone pools to a different or new Datastore (1028754)

But what does a rebalance really do again? 

The best quick definition comes from the pubs:

A desktop rebalance operation evenly redistributes linked-clone desktops among available datastores. If possible, schedule rebalance operations during off-peak hours.

So that helps us if we add new datastores to an existing cluster, but it doesn’t really help us if we need to move our stateless View environment onto new servers and storage altogether. I used this procedure with a customer recently and it worked great. I’m sure there are many other minor variations on how to do this, but this method was done purely with native commands in the View admin interface, no PowerCLI scripting required.

Here are a few important notes to keep in mind:

  • This method assumes that the linked clone desktops are completely stateless and can be refreshed without impacting end user functionality or data.
  • The new cluster must use the same vCenter server as the original pool, as Moving View-managed desktops between vCenter Servers is not supported (1018045).
  • During my testing on View 5.1.1 744999, I found that if you simply change the pool configuration so that the new cluster and storage are selected, then attempt a rebalance, linked clone desktops start getting moved over before a replica is created on the new cluster, resulting in some VMs that can power on but have no OS. This may have been a fluke during testing, but seeing it once was enough to put a bad taste in my mouth for that method.
  • If you are migrating from one cluster to another that has even slightly different build versions of ESXi, you may need a gold image update for VMware Tools, and to ensure users don’t receive any unexpected hardware messages, specifically with XP. The best way I found to do this is to clone the existing gold image over to the new cluster and start a new gold image for the pool. Remember that when you clone a VM, all the snapshots will be nuked on the newly created copy of the VM, so be prepared to lose your snapshot tree if this is the case.

With those points in mind, here is a nice visual “visio’d whiteboard session” with steps to follow that worked for me. It’s nothing really advanced, and it’s just nice to see it laid out in a manner that makes sense.



Remember that before starting step 3, you should have confirmed whether your golden image will require an update once located on the new cluster. You should also document the pool provisioning settings before starting so that they can be set back to the original parameters when finished. The point of step 4 is that you need the pool to provision at least one new VM, so bump up your provisioning numbers or delete existing provisioned VMs to make that happen.





Hope this helps you fellow View administrators! All feedback and constructive criticism are welcome here, as I only care about having the correct information. See you all around and about, and have a happy 2014!

How to reset administrator@vsphere.local password in vCenter 5.5

  1. Open command prompt as administrator on the SSO 5.5 server
  2. cd /d "C:\Program Files\VMware\Infrastructure\VMware\CIS\vmdird"
  3. vdcadmintool.exe
  4. Press 3 to enter the username
  5. Type: cn=Administrator,cn=users,dc=vSphere,dc=local
  6. Hit enter
  7. New password is displayed on screen
  8. Press 0 to exit
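If you ever need to drive this from a script (say, across several SSO servers), vdcadmintool reads its menu choices from stdin, so the keystrokes from steps 4–8 can be fed in non-interactively. The sketch below is a hypothetical Python wrapper, not an officially supported method; the tool path and prompt sequence come straight from the steps above.

```python
import subprocess

# Path from step 2 above; adjust if SSO was installed elsewhere.
VDCADMINTOOL = r"C:\Program Files\VMware\Infrastructure\VMware\CIS\vmdird\vdcadmintool.exe"

def build_stdin(account_dn="cn=Administrator,cn=users,dc=vSphere,dc=local"):
    """Assemble the keystrokes from steps 4-8: option 3, the account DN, option 0."""
    return "3\n{}\n0\n".format(account_dn)

def reset_sso_password():
    """Run vdcadmintool non-interactively; the new password appears in stdout."""
    result = subprocess.run(
        [VDCADMINTOOL],
        input=build_stdin(),
        capture_output=True,
        text=True,
    )
    return result.stdout
```

The generated password is mixed in with the tool's menu text on stdout, so you would still need to pick it out of the returned string.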

Pretty easy eh?



You can get to the same utility on the VCSA by logging in with root at the console and launching the following command: 


Follow the same process listed above to generate a new password. 

Wireshark Network Capture Any vSwitch Traffic ESXi 5.5

So before ESXi 5.5, if we wanted to perform a packet capture natively from the ESXi box using standard switches, we could only use the tcpdump-uw tool, which only allowed us to capture traffic on individual VMkernel ports. Not very useful if there’s a problem on a standard vSwitch that only has virtual machine traffic on it. With the dvSwitch, we can do port mirroring and NetFlow, but that left the sad little standard switch outside in the bitter cold when it came time to troubleshoot non-vmkernel traffic.

Good news, space heroes: released with 5.5 is the newly minted pktcap-uw tool, which allows you to specify any uplink for packet capture.

For example, to capture 100 packets on vmnic1 and output it as a pcap file into /tmp:

pktcap-uw --uplink vmnic1 --outfile /tmp/vmnic1cap.pcap -P -c 100


From here we can use WinSCP or another file transfer tool to download the pcap file and load it into Wireshark.

pktcap-uw -h yields the help message with all usage options, including source and destination IP/mac filters.
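Because the option list is long, it can be handy to assemble the command line programmatically when you capture from several uplinks with different filters. The helper below is just an illustrative sketch of my own, not part of the tool; the flag names come straight from the help output below.

```python
def pktcap_cmd(uplink, outfile=None, count=None, pcapng=False, filters=None):
    """Build a pktcap-uw argument list from the options in the help output.
    `filters` is a dict of flow filter options, e.g. {"srcip": "10.0.0.5"}.
    This is only a convenience sketch; it does not validate option values."""
    cmd = ["pktcap-uw", "--uplink", uplink]
    if outfile:
        cmd += ["-o", outfile]
        if pcapng:
            cmd.append("-P")      # pcapng format only works together with -o
    if count:
        cmd += ["-c", str(count)]
    for opt, val in (filters or {}).items():
        cmd += ["--" + opt, val]  # e.g. --srcip 10.0.0.5
    return cmd

# Equivalent to the example above (-o is the short form of --outfile):
print(" ".join(pktcap_cmd("vmnic1", outfile="/tmp/vmnic1cap.pcap",
                          pcapng=True, count=100)))
```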

Packet Capture and Trace command usage:
        == Create session to capture packets ==
         pktcap-uw [--capture <capture point> | [--dir <0/1>] [--stage <0/1>]]
            [--switchport <PortID> | --vmk <vmknic> | --uplink <vmnic> |
               --dvfilter <filter name> | --lifID <lif id for vdr>]
            [-f [module name.]<function name>]
            [-AFhP] [-p|--port <Socket PORT>]
            [-c|--count <number>] [-s|--snapLen <length>]
            [-o|--outfile <FILE>] [--console]
            [Flow filter options]
        == Create session to trace packets path ==
         pktcap-uw --trace
            [-AFhP] [-p|--port <Socket PORT>]
            [-c|--count <number>] [-s|--snapLen <length>]
            [-o|--outfile <FILE>] [--console]
            [Flow filter options]
The command options:
        -p, --port <Socket PORT>
                Specify the port number of vsocket server.
        -o, --outfile <FILE>
                Specify the file name to dump the packets. If unset,
                output to console by default
        -P, --ng   (only working with '-o')
                Using the pcapng format to dump into the file.
        --console  (by default if without '-o')
                Output the captured packet info to console.
        -s, --snaplen <length>
                Only capture the first <length> packet buffer.
        -c, --count <NUMBER>
                How many count packets to capture.
        -h
                Print this help.
        -A, --availpoints
                List all capture points supported.
        -F
                List all dynamic capture point functions supported.
        --capture <capture point>
                Specify the capture point. Use '-A' to get the list.
                If not specified, will select the capture point
                by --dir and --stage setting
The switch port options:
(for Port, Uplink and Etherswitch related capture points)
        --switchport <port ID>
                Specify the switch port by ID
        --lifID <lif ID>
                Specify the logical interface id of VDR port
        --vmk <vmk NIC>
                Specify the switch port by vmk NIC
        --uplink <vmnic>
                Specify the switch port by vmnic
The capture point auto selection options without --capture:
        --dir <0|1>  (for --switchport, --vmk, --uplink)
                The direction of flow: 0- Rx (Default), 1- Tx
        --stage <0|1>  (for --switchport, --vmk, --uplink, --dvfilter)
                The stage at which to capture: 0- Pre: before, 1- Post:After
The capture point options
        -f [module name.]<function name>
                The function name. The Default module name is 'vmkernel'.
                (for 'Dynamic', 'IOChain' and 'TcpipDispatch' capture points)
        --dvfilter <filter name>
                Specify the dvfilter name for DVFilter related points
Flow filter options, it will be applied when set:
        --srcmac <xx:xx:xx:xx:xx>
                The Ethernet source MAC address.
        --dstmac <xx:xx:xx:xx:xx>
                The Ethernet destination MAC address.
        --mac <xx:xx:xx:xx:xx>
                The Ethernet MAC address(src or dst).
        --ethtype 0x<ETHTYPE>
                The Ethernet type. HEX format.
        --vlan <VLANID>
                The Ethernet VLAN ID.
        --srcip <x.x.x.x[/<range>]>
                The source IP address.
        --dstip <x.x.x.x[/<range>]>
                The destination IP address.
        --ip <x.x.x.x>
                The IP address(src or dst).
        --proto 0x<IPPROTYPE>
                The IP protocol.
        --srcport <SRCPORT>
                The TCP source port.
        --dstport <DSTPORT>
                The TCP destination port.
        --tcpport <PORT>
                The TCP port(src or dst).
        --vxlan <vxlan id>
                The vxlan id of flow.

For more information, this KB article covers usage in greater detail.

How to Replace a C7000 Virtual Connect VC1/10 with a Flex-10 with VC Defined MAC & WWNs

So I recently had to upgrade a single C7000 enclosure from two VC1/10 modules with two 20 port 8Gb fiber cards to two Flex-10 modules with the two 20 port 8Gb fiber cards. We also wanted to upgrade the VC modules to 4.10. I was able to do this successfully by following this runbook. In this case, the customer had VC defined MACs and WWNs and wanted to ensure that they all stayed the same when we re-created the server profiles; see the note at the bottom regarding that. This procedure also required downtime for all blades in the enclosure we worked on.


1. Document via screen shot all server profiles, including multiple VLAN configurations

** Without this you will be HOSED during the server profile rebuild. If booting from SAN, make sure to get the WWNs and LUNs for the boot nodes.

2. Document via screen shot all assigned WWN and MAC addresses, including HP Pre-defined Ranges
3. Design new uplink configurations, active/active, active/passive, ensure free ports, etc.
4. Ensure a backup of the VC configuration is taken

- In VC Manager: Domain Settings > Backup/Restore
- Backup configuration
- Done

Downtime GO:

1. Ensure all Blades are powered off
2. Upgrade OA to 4.01

- Browse to OA interface
- Enclosure Information > Active Onboard Administrator > Firmware Update
- Upload the appropriate BIN file
- Ensure successful login

3. Run VC Healthcheck and document states

- vcsu -a healthcheck -i 10.X.X.X -u OAadministrator -p OApassword
- Document via screen shot health state

4. Unassign all server profiles from all servers
5. Delete the virtual connect domain

- In VC Manager
- Domain Settings > Configuration
- Delete Domain button
- Enter domain name
- wait 180 seconds

6. Power down modules from OA
7. Remove VC1/10 Modules from interconnect bays 1 & 2
8. Populate Flex-10 Modules in interconnect bays 1 & 2
9. Initialize Flex-10 Modules, assign them interconnect bay IP addresses in the OA
10. Navigate to the VC Module and begin the domain wizard setup

- Import the enclosure, choose new VC domain
- Name the VC domain
- Finish

11. Initiate the Networking Wizard

- Use VC defined MAC addresses, ensure that HP Predefined 1 sets are chosen
- Do not create any networks at the present time
- Finish Wizard

12. Initiate the FC Wizard

- Use VC defined WWNs, ensure that HP Predefined 1 sets are chosen
- Define Fabric, Finish Wizard

13. Define networks, Define fabrics
14. Create and apply server profiles
15. Power on servers and test connectivity
16. Upgrade VC to 4.10

- vcsu -a update -i 10.X.X.X -u OAadministrator -p OApassword -l C:\vcfwall410.bin -vcu VCAdministrator -vcp VCpassword

- Ensure that any dependent modules (FC cards) are updated to match

- Wait for completion

17. Upgrade server firmwares

- Mount the HP_Service_Pack_for_Proliant_2013.09.0-0_744345-001_spp_2013.09.0-SPP2013090.2013_0830.30.iso to the blade, boot to it and run the firmware update

18. Boot blades and ensure connectivity

** On step 14, we had to make dummy profiles that filled in MAC addresses and WWNs for profiles that had been deleted in-between the existing ones. You wouldn’t have to worry about this as long as you re-made the server profiles in the exact same order as they were originally made. Don’t worry if you accidentally get it out of order; just removing the NIC/HBA from the server profile will put the incorrectly used MAC/WWN back into the pool, and it will be used next.

** As a side note, HP doesn’t ever recommend using HP Pre-defined set #1, because its ranges can get stomped on if someone sets up another enclosure on #1 without checking first. In this case we didn’t have a choice.
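The pool behavior in the note above is easier to reason about if you think of a VC-defined range as a base address handed out sequentially in profile-creation order. The sketch below illustrates that allocation model; the base MAC is a made-up example, not HP's actual Pre-defined set #1 range.

```python
def mac_range(base, count):
    """Expand a starting MAC into `count` sequential addresses, the way a
    VC-defined pool allocates them in profile creation order. The base
    address here is illustrative only, not a real HP Pre-defined range."""
    base_int = int(base.replace(":", ""), 16)
    macs = []
    for i in range(count):
        raw = "{:012x}".format(base_int + i)
        macs.append(":".join(raw[j:j + 2] for j in range(0, 12, 2)))
    return macs

# Profiles rebuilt in the same order as before get the same addresses back;
# freeing an address (removing the NIC from a profile) returns it to the pool.
print(mac_range("00:17:a4:77:00:00", 4))
```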

pfSense LAN Party QoS 1.3 – Individually Limited TCP Streams

I recently posted some updated config files for the pfSense QoS box. For those of you who want to read the old post, here is the permalink.

Here are the updated config files:

Last time we had a LAN party, we had a small problem. Folks who legitimately needed HTTP for things like signing into Steam and the TF2 Item Server were fighting over an HTTP queue that was way full and dropping thousands of packets per second, mostly because a few people had very large download streams going. So I am going to try to solve contention in the subprime queue with a TCP traffic limiter on each IP address that gets a DHCP address. I made 2 rules on the LAN interface for every IP in the range. If you don’t like that IP range, it should be easy enough for you to do a find and replace on the firewall rule config download.
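Rather than hand-editing a downloaded config, the per-IP rule pairs can also be generated. The sketch below is a hypothetical generator of my own (the subnet and host range are parameters because yours will differ); the rule names and limits match the two rules described here.

```python
def limiter_rules(subnet_prefix, first_host, last_host):
    """Emit the two per-IP rules: a disabled 'penalty box' rule and an
    enabled TCP download ceiling, one pair per DHCP-able address. The
    subnet prefix and host range are illustrative parameters."""
    rules = []
    for host in range(first_host, last_host + 1):
        ip = "{}.{}".format(subnet_prefix, host)
        rules.append({"ip": ip, "name": "Sad Panda Penalty Box",
                      "limit": "200Kbps/100Kbps", "disabled": True})
        rules.append({"ip": ip, "name": "TCP Download Ceiling",
                      "limit": "500Kbps/250Kbps", "disabled": False})
    return rules

# For example, a DHCP scope of 205 hosts yields 410 rules in total.
print(len(limiter_rules("10.0.0", 50, 254)))
```

From the list of dicts you could render whatever format your pfSense version expects for a rules import, or just use it as a checklist while clicking through the GUI.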

Rule #1 (Disabled by Default): Sad Panda Penalty Box: Limits all traffic from that particular IP address to 200Kbps/100Kbps

Rule #2 TCP Download Ceiling: Limits all TCP traffic from that particular IP address to 500Kbps/250Kbps


With these 410 different rules in place, you ensure that anyone who gets a DHCP address is individually limited to 0.5Mbps TCP download speed, which can be controlled with the limiter config. Also, you now have a method of identifying who is sucking up all the bandwidth, and putting them in the bandwidth naughty corner. Check out the embedded Vimeo demonstration for a better look at what’s up.