VMware Horizon View: A VCDX-Desktop’s Design Philosophy Update

During the preparation for my VCDX-Desktop, I had the pleasure of meeting a lot of awesome folks. One of them shared this clip with me, and I felt it appropriate to pass along. It is 10 minutes long, but it perfectly demonstrates how requirements that turn into moving targets can change iteration after iteration of a design.

I have been on a residency for the past few months in a 1,200-user VDI deployment for a customer in the financial industry, helping to finish the rollout portion of a project that had already stalled several times. Yes, this means that as a VCDX-Desktop my first gig was in fact physically deploying zero clients and helping users with random enterprise applications and MS Access databases. Not exactly the high life, but a little bit of humble pie never hurt anyone. Eventually, I came to see this as an awesome experience that delivery consultants seldom get: the opportunity to see a VDI project all the way through to the end rather than simply doing the initial config, setting up a few pools, and high-fiving the team on the way out the door.

Here are my big-picture cautions and design philosophy updates, in no particular order. Please remember that as no two businesses are identical, neither are their VDI requirements. It’s extremely difficult to generalize about VDI except to say that requirements vary widely, so apply a grain of salt here depending on your use cases. Also, some of this isn’t news, just public reiteration.

Storage

  1. I fail to understand why it continues to take a poorly performing virtual desktop for a storage team to finally arrive at the same conclusion that my Mom arrived at while standing in the laptop aisle at Best Buy. If you’re an average person buying a computer today, what’s one of the first upgrades you’d consider? It’s likely solid state storage for the disk holding the operating system and the majority of applications. This hammers home the point: if there is any chance that the underlying storage will not perform to the requirements AND THEN SOME, there is no way I would ever sign off on the configuration. Look at what’s available in the market today: virtual desktops are competing with physical computers that run on a single solid state drive, which provides enough storage performance AND THEN SOME. With that in mind, the following are the storage recommendations I’ll make.
  2. Storage selected for the large scale VDI projects which have a chance of scaling beyond 500 users should:
    1. Be on a frame/solution that is separate from, and not impacted by, server-based disk workloads. Several customers I worked with that ran a shared server/desktop storage platform eventually ended up splitting it out. Save yourself some time and hair pulling and do it up front. You should already be doing this for hosts, so why wouldn’t you for storage as well?
    2. Be high performing, with at least 97% of all writes going directly to flash or landing as cache hits. VDI is generally an 80/20 write/read workload, so write performance is extremely important, far more important than most replica-tiering and VSA documents, which focus on offloading reads, would have you believe.
    3. Deliver consistently low latency. The highest physical device command latency that should EVER be experienced on a host running virtual desktops, in my experience, is 7ms. The lower you can get it consistently, the better.
    4. Be designed with a random, spiky, write-heavy workload in mind.
    5. Does your storage team have time to validate that all desktops are meeting the above requirements all the time? If not, then just push the all flash array performance win button. If you size the church for Easter Sunday, it’s safe to say it will work the rest of the year too.
    6. I really enjoy the hyperconverged model for a couple of reasons:
      1. Inherently segregated from other disk workloads
      2. The Virtual Desktop team is the storage team
      3. Predictable cost that scales in smaller outlays of cash at around 90 users per host, rather than large outlays for additional disk trays, storage nodes, and storage fabric ports
      4. Generally cheaper per desktop
      5. If you utilize VMware’s Virtual SAN, here are a few tips: minimum 4 nodes; use only 15K magnetic disks; double-check the HCL for all components, specifically the disk controllers; and give yourself more SSD space than you think you need. Here’s a great reference architecture if you’re looking at this route.
  3. Do your bake-off and functional requirement homework. Select the storage solution that gives you the best performance and meets your requirements with the lowest risk. Speak to customers that have the solution in production and have performed code updates. I get that cost is always a factor, but I need to stress again that this storage needs to be rock solid from both a performance and an availability standpoint.
  4. Regardless of the solution chosen, storage performance must be validated before going live in production. This means either using View Planner or ponying up for LoginVSI. This step gets missed ALL THE TIME. Trust everyone and cut the deck. If you skip this step, you’ll find out about performance bottlenecks once you already have users in production on the hardware. Trust me when I say you don’t want that. (For a rough feel for the numbers involved, see the sketch just below.)
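To put rough numbers on the write-heavy point above, here’s a minimal sizing sketch. The 10 IOPS-per-desktop figure and the 80/20 split are rule-of-thumb assumptions, not measurements from this project; replace them with numbers from a desktop assessment of your actual users.

```python
# Rough steady-state IOPS estimate for a VDI pool.
# ASSUMPTIONS: 10 IOPS per desktop and an 80/20 write/read split are
# rules of thumb -- substitute measured values from your own assessment.

def pool_iops(desktops, iops_per_desktop=10, write_fraction=0.8):
    """Return (total, write, read) steady-state IOPS for a pool."""
    total = desktops * iops_per_desktop
    writes = total * write_fraction
    return total, writes, total - writes

total, writes, reads = pool_iops(1200)  # e.g., the 1,200-user project above
print(f"Steady state: {total} IOPS ({writes:.0f} write / {reads:.0f} read)")
# Boot storms, AV scans, and recomposes spike far above steady state --
# size the church for Easter Sunday, not the average pew count.
```

If the array can’t absorb that write load at under 7ms of device latency, with headroom to spare, it fails the AND THEN SOME test.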

VM Sizing/Golden Images/Windows Optimization

  1. For all virtual desktop designs I compose in the future involving Windows 7, the rule rather than the exception is the knowledge worker desktop at 2 vCPUs and 2GB of RAM. Ever since my VCP4 test days, I’ve stuck with the good ole mantra of “start with a single vCPU and scale up from there.” Well, ladies and gentlemen, I’ve changed my stance on that for virtual desktops. Think about it this way: you’d get looked at like an idiot if you bought a new PC off the shelf at Best Buy with a single-threaded CPU. You can get by with one for SUPER cotton candy workloads and task worker scenarios; however, I’m finding more often that this is the exception rather than the rule, especially for desktop replacement use cases. If you have a true task worker use case you can get by with a single vCPU, just validate that the IE and local app performance is acceptable. Not to mention that the PCoIP server service can consume quite a bit of CPU during graphically intensive screen changes, and on a single vCPU you run the risk of one application (Internet Explorer, say) spiking to 100%, impacting the quality of the session and potentially freezing it. I based a lot of my sizing recommendations on the document titled Server and Storage Sizing Guide for Windows 7 Desktops in a Virtual Desktop Infrastructure, which is a decent read; check it out if you’re interested. (A quick host-count sketch follows this list.)
  2. I do not recommend utilizing an x64 Windows VDI image unless there’s a requirement that pushes above the 4GB memory threshold, or some other application requirement. The simple reason is that enterprise application compatibility doesn’t always keep pace with the bleeding edge. It also generates a ton of confusion around DSN bitness, IE versions and corresponding plugins, Java versions, complexities with ThinApps, etc. Ask yourself this question: is every end user application in my enterprise compatible with x64 Windows TODAY? If the answer is yes, fire away. My guess is that for large organizations it isn’t, and you will eventually be doing what I have been doing recently: griping every time something is native x64 and not 32-bit, and vice versa. COE stands for COMMON operating environment, not uncommon.
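To see what the 2 vCPU / 2GB profile and the roughly 90-users-per-host figure from the hyperconverged point above translate to in hardware, here’s a minimal host-count sketch. The consolidation ratio, failover count, and overhead fraction are all assumptions; validate the real numbers with View Planner or LoginVSI before buying anything.

```python
# Quick host-count sketch for a 2 vCPU / 2 GB knowledge-worker profile.
# ASSUMPTIONS: ~90 users per host, N+1 failover capacity, no memory
# overcommit, and 25% headroom for hypervisor and per-VM memory overhead.

import math

def hosts_needed(users, users_per_host=90, failover_hosts=1):
    """Hosts required to run the pool with spare failover capacity."""
    return math.ceil(users / users_per_host) + failover_hosts

def ram_per_host_gb(users_per_host=90, gb_per_desktop=2, overhead=0.25):
    """Approximate RAM per host, including the assumed overhead headroom."""
    return math.ceil(users_per_host * gb_per_desktop * (1 + overhead))

print(f"1,200 users -> {hosts_needed(1200)} hosts, "
      f"each with roughly {ram_per_host_gb()} GB of RAM")
```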

Enterprise Application Management

  1. If you don’t have an enterprise application management and client OS strategy, you don’t have a VDI strategy. VDI is really just another way of accessing the same applications, user data, and personalization that would otherwise be available on a physical machine. This means you must determine which app deployment methods you are using, whether locally installed, ThinApp, App-V, XenApp, or another app management solution. As part of this strategy, you must address printing, AppData directories, and persistence of the look and feel of applications and their settings, whether with profile-based products like Liquidware Labs ProfileUnity and AppSense, or with encompassing products like Unidesk and AppVolumes. You also need to account for the biggest problem with stateless desktops: the ability for users to install and persist their own applications on high-performing disk. A lot of the time this can be something simple like a web plugin, something that helpdesk can handle on a physical desktop no problem. The use case may determine whether users need to perform their own “one-off” installs, or whether the iron fist of IT rules the desktop and approves and installs all apps, no matter their size.

Systematic Rollout Plan

  1. You must have a rollout plan, and more importantly, YOU MUST ADHERE TO IT. “If you aim at nothing, you’re certain to hit it every time.” If you don’t have an implementation plan, you’re destined to fail even with best-in-class hardware and network performance. However, you can certainly fail after generating a plan by simply not adhering to it. Disclaimer: I work for AdvizeX Technologies, and several of our Sr. EUC consultants and I have together come up with our own methodology for engaging business departments independently and ensuring that all requirements are captured in a predictable, repeatable, and documented fashion while ensuring that end user satisfaction is met. Typically, requirements gathering is done during a full plan and design, but it’s almost impossible to answer the question “What does everyone do everywhere?” in the timeline that a Plan and Design project gives you. Usually guardrails keep Plan and Design projects to several use cases, decided at the beginning of the engagement. This means you must have a process for when you go to engage a department, with its application specifics, that wasn’t in your plan and design.

These are just some of the tips I’ve picked up on the most recent project, which has spanned the last few months. I welcome any open conversation and fearless feedback on the matter, as civil disagreements most often lead to positive change.

Upgrading to Horizon View 6 – Part 5 – Desktops

Upgrading to Horizon View 6 – Part 1 – Prepwork

Upgrading to Horizon View 6 – Part 2 – View Composer

Upgrading to Horizon View 6 – Part 3 – Connection Servers

Upgrading to Horizon View 6 – Part 4 – Security Servers

Upgrading to Horizon View 6 – Part 5 – Desktops

Good news! We’re near the end! This part is definitely the lowest stress part of the upgrade, so wipe that sweat from your brow and grab a lemonade.

Since we didn’t upgrade vCenter or ESXi in this series, there’s no need to upgrade VMware Tools. However, if we had, this is where we would update Tools, followed by the View Agent in the VM.

Just like every other update: if you’re using linked clones, it’s just a golden image update and recompose. If you’re using full clones, you’ll need to deploy it however you usually deploy new software: GPO, SCCM, Altiris, whatever.

Here are the procedure screenshots; not many notes here.

Make sure to choose the correct View Agent for your CPU architecture; x86_64 means 64-bit OS.

Waahooo! Look at that! They done went and rolled the HTML Access, RTAV, and vCOps features straight into the View Agent binary. Upgrades that roll in feature packs are just dandy. Of note, make sure to select Smart Cards if you require them; it’s the only option not selected by default. And then either recompose or script the update to your full clones.

Wowee! Look at that! We’re done! Hope this blog post series was informative, helpful and not too sarcastic sounding.


Playing Starcraft II on a Horizon View Desktop

Every once in a while I get to take off the VDI fixer/architect hat and put on the fun-loving gamer hat. This was one of those occasions. This is an older video on View 5.3 that I just now got around to posting; I lost it on my hard drive for a while and didn’t have time to look for it. It’s truly amazing what you can accomplish with NVIDIA GRID cards, APEX cards, and PCoIP.

Horizon View Design Decision: Should Composer be Co-Located or Standalone?

View Composer: Standalone or Co-Located with vCenter

The View Composer service, which is responsible for provisioning, recomposing, rebalancing, and refreshing linked clone desktops, can be set up either on a standalone machine or co-located with vCenter.

Option 1 – Co-Located Composer

Advantages:

  • This was traditionally the only option before View 5.1 was released; standalone installs were introduced to provide support for the VCSA.
  • Ensures that all provisioning services reside on a single server; all provisioning services will start together after the server is restarted
  • Reduced chance of network/firewall problems between vCenter and Composer

Drawbacks:

  • Resources must be adjusted for both services together, at the level of the entire virtual machine
  • Not an option when utilizing the VCSA

Option 2 – Standalone Composer

Advantages:

  • Resources can be adjusted individually for each service
  • Required when utilizing the VCSA

Drawbacks:

  • Services are spread across multiple machines, which increases the chance that a network or firewall problem causes an issue
  • Requires a separate virtual machine which overall increases management overhead for the environment

View Composer Cautions regardless of design choice:

  • Each View Composer instance and its associated vCenter must be a 1:1 mapping
  • Ensure that the vCenter and/or vCenter and composer servers have enough resources per the documentation to perform their respective roles (see links in sources below for details)
  • It is recommended that all SQL databases be run on a separate machine
| Design Quality | Co-Located | Standalone | Comments |
| --- | --- | --- | --- |
| Availability | o | o | The availability of both options is tied directly to the availability of the management cluster; each depends on vSphere HA to be restarted in the event of a host failure. |
| Manageability | o | ↓ | Placing Composer on a standalone server increases the management overhead. |
| Performance | o | ↑ | In a standalone configuration, each virtual machine can be sized to its own service’s performance characteristics rather than sharing a single VM. Important: this design decision does NOT impact the performance of virtual desktops; it is merely the isolation of server resources that is in question. |
| Recoverability | ↑ | o | In the event that SRM is utilized to fail over the management cluster, having a co-located instance drastically simplifies recovery. |
| Security | o | o | Neither option has a positive or negative impact on security, as both can utilize either SQL or integrated authentication for the database. |

Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality.

Recommendation

Real world consulting experience has shown me that a 4 vCPU, 16GB vCenter 5.5 with View Composer co-located can manage a full 2,000 powered-on linked clone virtual machines as well as the management cluster. Once you scale beyond 2,000 linked clone desktops, you need to take recompose time into consideration, which is limited not by actions initiated by View Composer but by vCenter’s and ESXi’s limited concurrent operation queues (see Fig. 1 below). I do not recommend allowing a single linked clone block to grow above 2,000 desktops. The back-of-the-napkin sketch below shows why.
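Here’s a minimal sketch of that math. The concurrency limit and per-desktop time are assumptions (the maintenance operations limit is configurable in View, and per-desktop recompose time varies with storage and image size), so time a test recompose of a small pool and substitute your own numbers.

```python
# Back-of-the-napkin recompose window estimate.
# ASSUMPTIONS: 12 concurrent maintenance operations and 5 minutes per
# desktop are placeholders -- measure your own environment and adjust.

import math

def recompose_hours(desktops, concurrent_ops=12, minutes_per_desktop=5.0):
    """Wall-clock hours to recompose a pool, processed in waves."""
    waves = math.ceil(desktops / concurrent_ops)
    return waves * minutes_per_desktop / 60.0

for pool in (500, 1000, 2000):
    print(f"{pool:>4} desktops: ~{recompose_hours(pool):.1f} hours")
```

Under those assumptions, a 2,000-desktop block is already a roughly 14-hour maintenance window, which is why the concurrency queue, not Composer itself, is the real ceiling.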

Typically View Composer does not consume a large amount of CPU and memory resources, and the database should not grow beyond 5GB. If there is a requirement to use the VCSA, standalone is your only choice. If the VCSA is not a requirement, I typically recommend co-located for increased simplicity and reduced management overhead. Both are fully supported configurations, but considering that View Composer is not a resource-intensive service, and that I prefer simplicity wherever possible in my designs, I typically prefer co-located even for large scale designs.

Sources

Horizon View 6.0 – vCenter Server and View Composer Virtual Machine Configuration

https://pubs.vmware.com/horizon-view-60/index.jsp#com.vmware.horizon-view.planning.doc/GUID-E5BEA591-D474-4CEE-9646-E9FB3CAF87B4.html

Horizon View 5.3 – vCenter Server and View Composer Virtual Machine Configuration

http://pubs.vmware.com/view-52/index.jsp#com.vmware.view.planning.doc/GUID-E5BEA591-D474-4CEE-9646-E9FB3CAF87B4.html

Fig. 1. Concurrent Operations with Linked Clone Desktops

This figure is from VMworld 2012 EUC1470, “Demystifying Large Scale Enterprise View Architecture: Illustrated with Lighthouse Case Studies,” by Lebin Cheng, VMware, Inc., and John Dodge, VMware, Inc.

Upgrading to Horizon View 6 – Part 4 – Security Servers

Upgrading to Horizon View 6 – Part 1 – Prepwork

Upgrading to Horizon View 6 – Part 2 – View Composer

Upgrading to Horizon View 6 – Part 3 – Connection Servers

Upgrading to Horizon View 6 – Part 4 – Security Servers

Upgrading to Horizon View 6 – Part 5 – Desktops

Remember our prepwork from the first post:

Prepare for View Security Servers (As in you’re about to perform the upgrade)

  • Make sure you understand whether or not you’re using IPsec tunneling between your connection broker and security server. Understand the implication that the Windows Firewall must be running.
  • In the View Administrator, select the security servers and click More Commands > Prepare for Upgrade or Reinstallation. This will disable the aforementioned IPsec rules if they are enabled.

Let’s go!

Prepare for Upgrade or Reinstallation if you need to break the IPsec tunnel between the connection broker and the security server. Specify the security server pairing password. Gogo upgrade on VSS1! Your information here should be the URL on the public-facing side, and your public IP address for the PCoIP External URL. One down, one to go. Make sure to specify the pairing password and Prepare for Upgrade BEFORE clicking here; revert back to the beginning if you can’t remember how.

Huzzah! Security servers upgraded to 6.0. Next up, VM side!
