Upgrading to Horizon View 6 – Part 2 – View Composer

Upgrading to Horizon View 6 – Part 1 – Prepwork

Upgrading to Horizon View 6 – Part 2 – View Composer

Upgrading to Horizon View 6 – Part 3 – Connection Servers

Upgrading to Horizon View 6 – Part 4 – Security Servers

Upgrading to Horizon View 6 – Part 5 – Desktops

Remember from Part 1 that we first need to complete these steps.

Prepare for View Composer Upgrade (As in you’re about to perform the upgrade)

  • Verify the physical requirements are met on the vCenter or standalone machine (for a standalone Composer, 2 vCPU, 8GB RAM, and a 60GB disk are usually recommended; 4 vCPU and 16GB of memory for a Composer co-located on vCenter)
  • Disable provisioning on all the linked clone pools
  • If any pools are set to refresh on logoff, change them to Never.
  • Grab a copy of all the SSL certs (at %ALLUSERSPROFILE%\Application Data\VMware\VMware VirtualCenter)
  • Stop the View Composer service and back up the View Composer database
  • Stop the vCenter service and back up the vCenter databases
  • Snapshot the vCenter and/or standalone Composer VM(s)
  • Start the vCenter service
  • Start the View Composer service
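The stop/back-up/start window in that checklist can be sketched as a short script. This is a hypothetical sketch, not VMware's documented procedure: the Windows service names ("VMware View Composer" and "vpxd" for vCenter) are assumptions you should verify in services.msc first, and the `run_if` helper simply skips any command that doesn't exist on the machine running it, so the script is inert anywhere except the Composer/vCenter server itself.

```shell
# Hedged sketch of the stop/back-up/start order from the checklist above.
# run_if skips commands that are not present on this machine.
run_if() { command -v "$1" >/dev/null 2>&1 || return 0; "$@"; }

run_if net stop "VMware View Composer"   # assumed service name - verify first
run_if net stop vpxd                     # assumed vCenter service name
# ... back up the View Composer and vCenter databases, then snapshot ...
run_if net start vpxd
run_if net start "VMware View Composer"
```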

Once we have a full backup plan, let’s upgrade View Composer.

Remember to budget about 30 minutes of downtime and make sure that all your provisioning and automatic refresh actions are turned off.

I recommend the automatic upgrade of the View Composer database if possible. If you need to migrate View Composer to a different virtual machine, or change from a co-located Composer to a standalone one, make sure to follow the upgrade guide to the letter. That’s outside the scope of this post, since I don’t have to do that in my lab.


Start the View Composer upgrade by running the EXE we already downloaded (make sure to run it as administrator if you have UAC on). And now, WIZARRRD PICTURES!


[Screenshots: View Composer install wizard, steps 1–4]

If you use integrated (AD) authentication instead of SQL auth, you will need to make sure you are logged in as the user who has access to the database.

I re-used the existing certificate.

[Screenshots: View Composer install wizard, steps 6–13]


And completed! Next up: Connection Servers.


How to delete the vsanDatastore for VMware Virtual SAN

There are a lot of blog posts on how to set up the cool new Virtual SAN, but not a lot on how to destroy the Virtual SAN object. Probably because it’s really simple. Never fear! In a shameless effort to drive hits from Google, here is your step-by-step guide on how to delete the vsanDatastore.

Deleting the vsanDatastore means that all the data on the underlying disk groups will be nuked. Gone. Vamoosed. This means you must first migrate the VMs either to available local storage or to another storage solution. You can delete a disk group from one of the nodes and reformat its disks into VMFS volumes, but you will obviously lose fault tolerance when you move to that. But hey, you’re the one deleting the Virtual SAN object, not me. You should know why you’re doing it and what you’re moving to. I had to do this because I needed to nuke and refresh my cluster on 5.5 U1 from the beta refresh.

1. Migrate all the VMs and templates elsewhere. Where that is, well, that is your problem. You can verify whether any objects remain by examining the related datastore object.
2. Turn off HA on the Virtual SAN enabled cluster. This needs to be off before you can turn off the Virtual SAN feature.
3. Delete all the disk groups individually. We will get warned that we should put the host in maintenance mode first if we’re performing maintenance, but since we’re nuking the sucker we don’t want to do that. As we delete the disk groups, we can see the local disks become available in the add storage wizard for each host.


4. Repeat the process until all the disk groups have been deleted.
5. We are now ready to turn off the VSAN Feature on the cluster object.


6. Click OK on the warning.


7. Re-enable HA on the cluster. All done!
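Some of the GUI steps above also have per-host CLI equivalents. A guarded sketch follows; `esxcli` and its `vsan` namespace only exist on an ESXi 5.5 host, so the calls are skipped anywhere else, and whether `vsan cluster leave` is present in your exact build is worth verifying with `esxcli vsan --help` before relying on it.

```shell
# Hedged per-host sketch: inspect and leave the VSAN cluster from the
# ESXi shell. run_if skips the calls when esxcli is not available.
run_if() { command -v "$1" >/dev/null 2>&1 || return 0; "$@"; }

run_if esxcli vsan storage list    # show which local disks VSAN has claimed
run_if esxcli vsan cluster leave   # pull this host out of the VSAN cluster
```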



How to reset administrator@vsphere.local password in vCenter 5.5

  1. Open command prompt as administrator on the SSO 5.5 server
  2. cd /d "C:\Program Files\VMware\Infrastructure\VMware\CIS\vmdird"
  3. Run vdcadmintool.exe
  4. Press 3 to enter the username
  5. Type: cn=Administrator,cn=users,dc=vSphere,dc=local
  6. Hit Enter
  7. New password is displayed on screen
  8. Press 0 to exit

Pretty easy eh?



You can get to the same utility on the VCSA by logging in as root at the console and launching it from there.


Follow the same process listed above to generate a new password. 

Wireshark Network Capture Any vSwitch Traffic ESXi 5.5

So before ESXi 5.5, if we wanted to perform a packet capture natively from the ESXi box using standard switches, we could only use the tcpdump-uw tool, which allowed us to capture traffic only on individual VMkernel ports. Not very useful if there’s a problem on a standard vSwitch that carries only virtual machine traffic. With the dvSwitch, we can do port mirroring and NetFlow, but that left the sad little standard switch outside in the bitter cold when it came time to troubleshoot non-VMkernel traffic.

Good news, space heroes: released with 5.5 is the newly minted pktcap-uw tool, which allows you to specify any uplink for packet capture.

For example, to capture 100 packets on vmnic1 and output them as a pcap file into /tmp:

pktcap-uw --uplink vmnic1 --outfile /tmp/vmnic1cap.pcap -P -c 100
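The same one-liner can be run against several uplinks in parallel. A guarded sketch follows; the vmnic names are examples (the real list would come from `esxcli network nic list`), and since pktcap-uw only exists on an ESXi 5.5+ host, the helper skips the calls anywhere else.

```shell
# Hedged sketch: capture 100 packets on each listed uplink in parallel.
# run_if makes this a no-op on machines without pktcap-uw.
run_if() { command -v "$1" >/dev/null 2>&1 || return 0; "$@"; }

for NIC in vmnic0 vmnic1; do            # example uplink names
  run_if pktcap-uw --uplink "$NIC" -P -c 100 -o "/tmp/${NIC}cap.pcap" &
done
wait                                    # block until all captures finish
```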


From here we can use WinSCP or another file transfer tool to download the pcap file and load it into Wireshark.

pktcap-uw -h yields the help message with all usage options, including source and destination IP/mac filters.

Packet Capture and Trace command usage:
        == Create session to capture packets ==
         pktcap-uw [--capture <capture point> | [--dir <0/1>] [--stage <0/1>]]
            [--switchport <PortID> | --vmk <vmknic> | --uplink <vmnic> |
               --dvfilter <filter name>]
               --lifID <lif id for vdr>]
            [-f [module name.]<function name>]
            [-AFhP] [-p|--port <Socket PORT>]
            [-c|--count <number>] [-s|--snapLen <length>]
            [-o|--outfile <FILE>] [--console]
            [Flow filter options]
        == Create session to trace packets path ==
         pktcap-uw --trace
            [-AFhP] [-p|--port <Socket PORT>]
            [-c|--count <number>] [-s|--snapLen <length>]
            [-o|--outfile <FILE>] [--console]
            [Flow filter options]
The command options:
        -p, --port <Socket PORT>
                Specify the port number of vsocket server.
        -o, --outfile <FILE>
                Specify the file name to dump the packets. If unset,
                output to console by default
        -P, --ng   (only working with '-o')
                Using the pcapng format to dump into the file.
        --console  (by default if without '-o')
                Output the captured packet info to console.
        -s, --snaplen <length>
                Only capture the first <length> packet buffer.
        -c, --count <NUMBER>
                How many count packets to capture.
        -h
                Print this help.
        -A, --availpoints
                List all capture points supported.
        -F
                List all dynamic capture point functions supported.
        --capture <capture point>
                Specify the capture point. Use '-A' to get the list.
                If not specified, will select the capture point
                by --dir and --stage setting
The switch port options:
(for Port, Uplink and Etherswitch related capture points)
        --switchport <port ID>
                Specify the switch port by ID
        --lifID <lif ID>
                Specify the logical interface id of VDR port
        --vmk <vmk NIC>
                Specify the switch port by vmk NIC
        --uplink <vmnic>
                Specify the switch port by vmnic
The capture point auto selection options without --capture:
        --dir <0|1>  (for --switchport, --vmk, --uplink)
                The direction of flow: 0- Rx (Default), 1- Tx
        --stage <0|1>  (for --switchport, --vmk, --uplink, --dvfilter)
                The stage at which to capture: 0- Pre: before, 1- Post:After
The capture point options
        -f [module name.]<function name>
                The function name. The Default module name is 'vmkernel'.
                (for 'Dynamic', 'IOChain' and 'TcpipDispatch' capture points)
        --dvfilter <filter name>
                Specify the dvfilter name for DVFilter related points
Flow filter options, it will be applied when set:
        --srcmac <xx:xx:xx:xx:xx>
                The Ethernet source MAC address.
        --dstmac <xx:xx:xx:xx:xx>
                The Ethernet destination MAC address.
        --mac <xx:xx:xx:xx:xx>
                The Ethernet MAC address(src or dst).
        --ethtype 0x<ETHTYPE>
                The Ethernet type. HEX format.
        --vlan <VLANID>
                The Ethernet VLAN ID.
        --srcip <x.x.x.x[/<range>]>
                The source IP address.
        --dstip <x.x.x.x[/<range>]>
                The destination IP address.
        --ip <x.x.x.x>
                The IP address(src or dst).
        --proto 0x<IPPROTYPE>
                The IP protocol.
        --srcport <SRCPORT>
                The TCP source port.
        --dstport <DSTPORT>
                The TCP destination port.
        --tcpport <PORT>
                The TCP port(src or dst).
        --vxlan <vxlan id>
                The vxlan id of flow.
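As a worked example of combining the flow filter options above, the guarded sketch below captures only TCP port 80 traffic to or from a single guest's MAC on vmnic1. The MAC address is made up for illustration, and since pktcap-uw only exists on an ESXi host, the helper skips the call anywhere else.

```shell
# Hedged sketch: filtered capture of one guest's web traffic on vmnic1.
# run_if makes this a no-op on machines without pktcap-uw.
run_if() { command -v "$1" >/dev/null 2>&1 || return 0; "$@"; }

run_if pktcap-uw --uplink vmnic1 \
    --mac 00:50:56:aa:bb:cc --tcpport 80 \
    -P -c 50 -o /tmp/web-traffic.pcap
```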

For more information, see this KB article.


How to Replace a C7000 Virtual Connect VC1/10 with a Flex-10 with VC-Defined MACs & WWNs

So I recently had to upgrade a single C7000 enclosure from two VC1/10 modules with two 20-port 8Gb fiber cards to two Flex-10 modules with the same two 20-port 8Gb fiber cards. We also wanted to upgrade the VC modules to 4.10. I was able to do both successfully by following this runbook. In this case, the customer had VC-defined MACs and WWNs and wanted to ensure that they all stayed the same when we re-created the server profiles; see the note at the bottom regarding that. This procedure also required downtime for all blades in the enclosure we worked on.


1. Document via screen shot all server profiles, including multiple VLAN configurations

** Without this you will be HOSED during the server profile rebuild. If booting from SAN, make sure to get the WWNs and LUNs for the boot nodes.

2. Document via screen shot all assigned WWN and MAC addresses, including HP Pre-defined Ranges
3. Design new uplink configurations, active/active, active/passive, ensure free ports, etc.
4. Ensure a backup of the VC configuration is taken

– In VC Manager: Domain Settings > Backup/Restore
– Backup configuration
– Done

Downtime GO:

1. Ensure all Blades are powered off
2. Upgrade OA to 4.01

– Browse to OA interface
– Enclosure Information > Active Onboard Administrator > Firmware Update
– Upload the appropriate BIN file
– Ensure successful login

3. Run VC Healthcheck and document states

– vcsu -a healthcheck -i 10.X.X.X -u OAadministrator -p OApassword
– Document via screen shot health state

4. Unassign all server profiles from all servers
5. Delete the virtual connect domain

– In VC Manager
– Domain Settings > Configuration
– Delete Domain button
– Enter domain name
– wait 180 seconds

6. Power down modules from OA
7. Remove VC1/10 Modules from interconnect bays 1 & 2
8. Populate Flex-10 Modules in interconnect bays 1 & 2
9. Initialize Flex-10 Modules, assign them interconnect bay IP addresses in the OA
10. Navigate to the VC Module and begin the domain wizard setup

– Import the enclosure, choose new VC domain
– Name the VC domain
– Finish

11. Initiate the Networking Wizard

– Use VC defined MAC addresses, ensure that HP Predefined 1 sets are chosen
– Do not create any networks at the present time
– Finish Wizard

12. Initiate the FC Wizard

– Use VC defined WWNs, ensure that HP Predefined 1 sets are chosen
– Define Fabric, Finish Wizard

13. Define networks, Define fabrics
14. Create and apply server profiles
15. Power on servers and test connectivity
16. Upgrade VC to 4.10

– vcsu -a update -i 10.X.X.X -u OAadministrator -p OApassword -l C:\vcfwall410.bin -vcu VCAdministrator -vcp VCpassword

– Ensure that any dependent modules (FC cards) are updated to match

– Wait for completion

17. Upgrade server firmwares

1. Mount the HP_Service_Pack_for_Proliant_2013.09.0-0_744345-001_spp_2013.09.0-SPP2013090.2013_0830.30.iso to the blade, boot to it and run the firmware update

18. Boot blades and ensure connectivity

** On step 14, we had to make dummy profiles to fill in MAC addresses and WWNs for profiles that had been deleted in between the existing ones. You wouldn’t have to worry about this as long as you re-made the server profiles in the exact same order as they were originally created. Don’t worry if you accidentally get it out of order; just removing the NIC/HBA from the server profile will put the incorrectly used MAC/WWN back into the pool, where it will be the next one assigned.

** As a side note, HP doesn’t ever recommend using HP Pre-defined set #1, because its ranges can get stomped on if someone sets up another enclosure on #1 without checking first. In this case we didn’t have a choice.