How to Extend XtremIO Volumes

Want to extend an XtremIO volume? Simply right-click on it and select Modify Volume. Pretty darn easy. I was somewhat annoyed to find that this information isn't exactly readily available from "the Google", so I decided to write it up after verifying with my EMC home boys that it is a non-destructive process. This was performed on an XtremIO running the 3.0.0 build 44 code. I'm not sure if the same works for shrinking a volume, but perhaps someone can comment with that information.




Posted in Uncategorized

UCS, View, Virtual SAN Reference Architecture: Cool! But did you think about…

I really appreciate the article titled "Myth versus Math: Setting the Record Straight on Virtual SAN" by Jim Armstrong. He highlights some of the same points I'm going to make in this post: Virtual SAN is a very scalable solution and can be configured to meet many different types of workloads, and reference architectures are starting points, not complete solutions. I am a big fan of hyper-converged architectures such as Virtual SAN, and I'm also a big fan of Cisco-based server solutions. However, I've had to correct underperforming designs that were based entirely off of reference architectures, so I want to ensure that all the assumptions are accounted for in a specific design. One reason I'm writing this is that I recently delivered a Horizon View Plan and Design which included a Virtual SAN on Cisco hardware, so I wrestled with these assumptions not too long ago. The second reason is that this architecture utilizes "Virtual SAN Ready Nodes", specifically as designed for virtual desktop workloads. That vernacular inspires trust and an immediate "Oh neat, no thinking required for sizing" in most people, but I would argue you still have to put on your thinking cap and make sure you are meeting the big picture solution requirements, especially for a greenfield deployment.

The reference architecture that I recently read is linked below.

Remember, reference architectures establish what is possible with the technology, not necessarily what will work in your house. Let’s go through the assumptions in the above setup now.

Assumption #1: Virtual Desktop Sizing Minimums are OK

Kind of a “duh”. Nobody is going to use big desktop VMs for a reference architecture because the consolidation ratios are usually worse. If you read this article titled “Server and Storage Sizing Guide for Windows 7 Desktops in a Virtual Desktop Infrastructure” it highlights minimum and recommended configurations for Windows 7, which includes the below table.


Some (not all) of the linked clones configured in this reference utilize 1.5GB of RAM and 1 vCPU, which is below the recommended values. I've moved away from the concept of starting with 1 vCPU for virtual desktops regardless of the underlying clock speed, simply because a multi-threaded desktop performs better provided the underlying resources can satisfy the requirements. I have documented this in a few previous blog posts, namely "Speeding Up VMware View Logon Times with Windows 7 and Imprivata OneSign" and "VMware Horizon View: A VCDX-Desktop's Design Philosophy Update". In the logon times post, I found that simply bumping from 1 vCPU to 2 shaved 10 seconds off the logon time in a vanilla configuration.

Recommendation: Start with 2 vCPU and at least 2GB as a matter of practice with Windows 7, regardless of virtual desktop type. You can consider bumping some 32-bit use cases down to 1 vCPU if you can reduce the in-guest CPU utilization by offloading PCoIP with an Apex card or by utilizing host-based vShield-enabled AV protection instead of in-guest. Still, you must adhere to the project and organizational software requirements. Size in accordance with your desktop assessment, and POC your applications in VMs sized per your specs. Do not overcommit memory.

Assumption #2: Average Disk Latency of 15ms is Acceptable

This reference architecture assumes, from a big-picture perspective, that 15ms of disk latency is acceptable, per the conclusion. My experience has shown that around 7ms of average latency is the most I'm willing to risk on behalf of a customer, with the obvious caveat of "the lower the better". Let's examine the 800 linked clone desktop latency: in that example, the average host latency was 14.966ms. More often than not, requirement #1 on the VDI projects I'm engaged on is acceptable desktop performance as perceived by the end user. In this case, perhaps more disk groups, more magnetic disks, or smaller and faster magnetic disks should have been utilized to service the load with reduced latency.

Recommendation: Just because it's a hyper-converged "Virtual SAN Ready Node" doesn't mean you're off the hook for operational design validation. Perform your View Planner or LoginVSI runs and record recompose times on the config before moving to production. When sizing, try to build so that average latency is at or below 7ms (my recommendation), or whatever latency you're comfortable with. It's well known that Virtual SAN cache should be at least 10% of the disk group capacity, but hey, nobody will yell at you for doing 20%. I also recommend the use of smaller 15K drives when possible in a Virtual SAN for VDI, since they can handle more IO per spindle, which fits better with linked clones. Need more IO? Time for another disk group per host and/or faster disk. Big picture? You don't want to dance with the line of "enough versus not enough storage performance" in a VDI. Size appropriately in accordance with your desktop assessment, beat it up in pre-prod, and record performance levels before users are on it. Or else they'll beat you up.
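As a rough illustration of the cache-sizing guidance above, here's a quick sketch. The 20% default is my preference and the 10% floor is the well-known minimum; the disk group below is a made-up example, not one from the reference:

```python
def cache_ssd_size_gb(magnetic_disks, disk_size_gb, cache_pct=0.20):
    """Flash cache sized as a percentage of raw disk group capacity.
    10% is the well-known minimum; I lean toward 20% for linked clones."""
    return magnetic_disks * disk_size_gb * cache_pct

# Example disk group: six 600GB 15K drives
print(cache_ssd_size_gb(6, 600))        # 720.0 GB of flash at 20%
print(cache_ssd_size_gb(6, 600, 0.10))  # 360.0 GB at the bare minimum
```

The point of the math: doubling the percentage is cheap insurance compared to re-architecting a disk group that's thrashing its cache.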

Assumption #3: You will be handling N+1 with another server not pictured

In the configuration used, N+1 is not accounted for. Per the norm in most reference architectures, it's taken straight to the limit without thought for failure or maintenance: 800 desktops, 8 hosts. The supported maximum is 100 desktops per host on Virtual SAN, due to the maximum number of objects supported per host (3000) and the number of objects linked clones generate. So while you can run 800 desktops on 8 hosts, you are officially at capacity. Several failure scenarios were demonstrated in the reference (simulated failure of an SSD, a magnetic disk, and the network), but the failure of an entire host was not one of them.

Recommendation: Size for at least N+1. Ensure you're taking 10% of the pCores away when running your pCores/vCPU consolidation ratios for desktops, to account for the 10% Virtual SAN CPU overhead. Ensure during operational design validation that the failure of a host, including a data resync, does not increase disk workload in such a fashion that it negatively impacts the experience. In short, power off a host in the middle of a 4-hour View Planner run and watch for disk latency one hour later during the resync.
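To make the recommendation concrete, here's a back-of-napkin host-count sketch. The core count and vCPU-per-pCore ratio below are illustrative assumptions, not figures from the reference; only the 10% Virtual SAN overhead and the 100 desktops/host cap come from above:

```python
import math

def hosts_needed(desktops, vcpus_per_desktop, pcores_per_host,
                 vcpu_per_pcore=10.0, vsan_cpu_overhead=0.10,
                 max_desktops_per_host=100, spares=1):
    """Reserve 10% of pCores for Virtual SAN, cap desktops per host at
    the supported 100 (the 3000-object limit), then add N+1 spares."""
    usable_cores = pcores_per_host * (1 - vsan_cpu_overhead)
    by_cpu = math.ceil(desktops * vcpus_per_desktop /
                       (usable_cores * vcpu_per_pcore))
    by_capacity = math.ceil(desktops / max_desktops_per_host)
    return max(by_cpu, by_capacity) + spares

# 800 desktops at 2 vCPU on hypothetical 24-core hosts
print(hosts_needed(800, 2, 24))  # 9: the reference's 8 hosts plus one
```

Whichever constraint bites first (CPU or the object limit) sets the floor; the spare host comes on top of that.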

Assumption #4: Default policy of FTT=0 for Linked Clones is Acceptable

I disagree with the mindset that FTT=0 is acceptable for linked clones. In my opinion it is a low quality configuration, as the failure of a single disk group can result in lost virtual desktops. The argument in favor of FTT=0 for stateless desktops is: "So what? If they're stateless, we can afford to lose the linked clone disks; all the user data is stored on the network!" Well, that's an assumption, and a fairly large shift in thinking, especially when trying to pry someone away from the incumbent monolithic array mentality. The difference here is that the availability of the storage on which the desktops reside is directly correlated with the availability of the hosts. If one of the hosts restarts unexpectedly, with FTT=0 the data isn't really lost, just unavailable while the host is rebooting. However, if a disk group suffers data loss, it makes a giant mess of a large number of desktops that you end up having to clean up manually, since the desktops cannot be removed from the datastore in an automated fashion by View Composer. There's a reason the KB which describes how to clean up orphaned linked clone desktops comes complete with two YouTube videos. This is something I've unfortunately done many times, and wouldn't wish upon anyone.

One point of design concern is that if we switch the linked clone storage policy from FTT=0 to FTT=1, we induce more IO operations which must be accounted for across the cluster during sizing. Another is that without FTT=1, the loss of a single host will result in some desktops that get restarted by HA, but also some that may only have their disk yanked. I would rather know for certain that the unexpected reboot of a single host will result only in a state change of the desktops located on that host, not a state change of some desktops with the storage availability of others called into question.

The big question I have regarding this default policy is: why would anyone move to a model that introduces a SPOF on purpose? Most requirements dictate removal of SPOFs wherever possible.
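One way to see the sizing impact of moving from FTT=0 to FTT=1 is that every write is committed to FTT+1 replicas, so cluster-wide backend write IOPS roughly double. A quick sketch with illustrative per-desktop numbers (the 10 IOPS/desktop figure is an assumption for the example):

```python
def backend_write_iops(desktops, iops_per_desktop, write_pct=0.8, ftt=1):
    """Each front-end write lands on ftt + 1 replicas."""
    frontend_writes = desktops * iops_per_desktop * write_pct
    return frontend_writes * (ftt + 1)

print(backend_write_iops(800, 10, ftt=0))  # 6400.0 backend write IOPS
print(backend_write_iops(800, 10, ftt=1))  # 12800.0: plan for this
```

That doubled write load is the price of removing the SPOF, and it has to be baked into the disk group sizing up front.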

Recommendation: For the highest availability, plan for and utilize FTT=1 for both linked clone and full desktops when using Virtual SAN.

The default linked clone storage policy is pictured below as installed with Horizon View 6.


Assumption #5: You’re going to use CBRC and have accounted for that in your host sizing

Recommendation: Ensure that you take another 2GB of memory off each host when sizing to allow for the CBRC (content based read cache) feature to be enabled.
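A sketch of the host memory math with CBRC in the picture. The 2GB CBRC reservation is the piece from above; the hypervisor overhead allowance and per-desktop RAM are my own assumed figures:

```python
def usable_desktop_ram_gb(host_ram_gb, cbrc_gb=2, hypervisor_gb=4):
    """RAM left for desktops after the 2GB CBRC reservation and an
    assumed hypervisor/VM overhead allowance."""
    return host_ram_gb - cbrc_gb - hypervisor_gb

def desktops_by_ram(host_ram_gb, gb_per_desktop=2):
    # No memory overcommit, per the sizing recommendation earlier
    return usable_desktop_ram_gb(host_ram_gb) // gb_per_desktop

print(desktops_by_ram(256))  # 125 desktops fit by RAM on a 256GB host
```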

That's all I got. No disrespect is meant to the team that put that doc together, but some people read those things as the gold standard, when in reality they're a reference: a starting point, not a complete solution.



Posted in Virtual SAN, VMware

AppVolumes: Asking the Tough Questions

I had some of these tough questions about AppVolumes when it first came out. I set out to answer these questions with some of my own testing in my lab. These questions are beyond the marketing and get down to core questions about the technology, ongoing management and integrations. Unfortunately I can’t answer all of them, but these are some of my observations after working in the lab with it and engaging some peers. Much thanks to @Chrisdhalstead for the peer review on this one. Feedback welcome!
Q. Will AppVolumes work running on a VMware Virtual SAN?
A. Yes! My lab is running entirely on the Virtual SAN datastore and AppVolumes has been working great for me on it.
Q. Can I replicate and protect the AppStacks and Writable volumes with vSphere Replication or vSphere Data Protection, or is array based backup/replication the only option?
A. At the time of writing, I'm pretty sure the only way to do this is to replicate the AppStacks with array-based replication. Of course, you could create a VM that serves storage to the hosts and replicate that, but that's not an ideal solution for production. Hopefully more information will come out on this in the near future. This means that currently there's no supported, automated method that I'm aware of to replicate the volumes when Virtual SAN is used for the AppStacks and writable volumes. Much thanks to the gang on Twitter that jumped in on this conversation: @eck79, @jshiplett, @kraytos, @joshcoen, @thombrown
Q. So let’s talk about the DR with AppVolumes. Suppose I replicate the LUNs with array based replication that has my AppStacks and Writable Volumes on it. What then?
A. First of all, this is a big part of the conversation, and from what I hear there are some white papers brewing for this exact topic. I will update this section when they come out. My first thought is for the environments with the highest uptime requirements, which are active/active multi-pod deployments. My gut reaction says that while you can replicate, scan and import AppStacks to keep app update procedures relatively simple, I'm not sure how well that's going to work with writable volumes when a user persona with actively used writable volumes lives in one datacenter. The same problem exists for DFS: you can only have a user connect from one side at a time (connecting from different sides simultaneously with profiles is not supported), so we generally need to give users a home base. However, DFS is an automated failover in that replication is constantly synching the user data to the other site, ready to go. So for that use case, first determine whether profile replication is a requirement, or whether it's OK for users to be without profile data during a failure. If keeping the data is a requirement, I'd say you should probably utilize a UEM with replicated shares via DFS or something similar. For active/warm standby, the advice is the same. For active/standby models, you'll have to decide whether you're going to replicate your entire VDI infrastructure along with your AppVolumes, or set up infrastructure on both sides.
Q. But after replication how will AppVolumes know what writable volumes belong to what user? Or what apps are in the AppStacks?
A. That information is located in the metadata. It should get picked up when re-scanning the datastore from the Administrative console. I’m sure there’s more to it than that, but I don’t have the means to test a physical LUN replication at the moment. Below is an example of what is inside the metadata files. This is what’s picked up when the datastore is scanned via the admin manager.
Q. What about backups? How can I make sure my AppStacks are backed up?
A. Backing up the SQL database is pretty easy and definitely recommended. To back up the AppStacks, you should consider and evaluate a backup product that allows for granular backups down to the virtual disk file level on the datastore (VDP can't do this). Otherwise you're going to have to download the entire directory the old fashioned way, or scp the directory to a backup location on a scheduled basis.
Q. In regards to application conflicts between AppStacks and Writable Volumes: who wins when different versions of the same app are installed? How can I control it?
A. My understanding is that the last volume mounted wins. In the event a user has a writable volume, that is always mounted last. By default, AppStacks are mounted in order of assignment, according to the activity log in my lab. If there is a conflict between AppStacks, you can change the order in which they are presented; however, you can only change the order for AppStacks that are assigned to the same group or user. For example, if you have 3 AppStacks and each is assigned to a different AD group, there's no way to control the order of assignment. If the 3 AppStacks are all assigned to the same AD group, you can override the order in which they are presented under the properties of the group.
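The precedence behavior described above can be modeled as a simple "last mount wins" merge. This is a toy model of what I observed in the lab, not AppVolumes internals; the app names and versions are made up:

```python
def effective_apps(appstacks_in_mount_order, writable=None):
    """Later mounts override earlier ones; the writable volume, which
    always mounts last, overrides everything."""
    resolved = {}
    for stack in appstacks_in_mount_order:
        resolved.update(stack)
    if writable:
        resolved.update(writable)
    return resolved

stacks = [{"firefox": "23"}, {"firefox": "24", "7zip": "9.20"}]
print(effective_apps(stacks, writable={"firefox": "12"}))
# firefox resolves to the writable volume's version 12
```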
Q. What happens if I Update an AppStack with an App that a User already installed in their writable volume? For example, if a user installed Firefox 12 in their writable volume, then the base AppStack was updated to include Firefox 23, what happens when the AppStack is updated?
A. My experience in the lab shows that when a writable volume is in play for a user, anything installed there that conflicts will trump what is installed in an AppStack. All file associations, regardless of applications installed in the underlying AppStacks, are likewise overridden by what has been configured in the user profile and persisted in the writable volume. My testing also showed that if you remove the application from the writable volume and refresh the desktop, the file associations will not change to what is in the base AppStack.
After the update:
Files in the C:\Program Files\ Directory were still version 12.
Q. What determines what is captured via UIA on a user’s writable volume and what is left on the machine?
A. Again, another thing I’m not sure of. I know that there must be some trigger or filter but it is not configurable. I was able to test and verify that all of these files are kept:
     All Files in the users’ profile
     HKCU Registry Settings
     Windows Explorer View Options such as File Extensions, hidden folders
Q. Can updates be performed to the App Stacks without disruption of the users that have the App Stack in use?
A. Yes. While it is true that you must unassign the old version of the AppVolume first, you can choose to wait until next logon/logoff. Then you can assign the new one and also choose to wait until the next logon/logoff. There is a brief moment after you unassign the AppStack that a user might log in and not get their apps, so off hours is probably a good time to do this in prod unless you’re really fast on the mouse.
Q. If one AppStack is associated to one person, what about people that need access to multiple pools or RDSH apps simultaneously?
A. A user can have multiple AppStacks that are used on different machines. You can break it down so they are only attached on computers that start with a certain hostname prefix and OS type.
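Conceptually, the prefix-based attachment works like the filter below. The assignment structure here is hypothetical shorthand for illustration, not the actual AppVolumes data model:

```python
def stacks_for_machine(hostname, os_type, assignments):
    """Attach a stack only when the hostname matches the assignment's
    prefix and the OS type matches."""
    return [a["stack"] for a in assignments
            if hostname.upper().startswith(a["prefix"].upper())
            and a["os"] == os_type]

assignments = [
    {"stack": "Office2013", "prefix": "VDI-FIN", "os": "Win7x64"},
    {"stack": "DevTools",   "prefix": "VDI-DEV", "os": "Win7x64"},
]
print(stacks_for_machine("vdi-fin-042", "Win7x64", assignments))
# ['Office2013']
```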
Q. So if an RDSH computer has an AppStack assigned to the computer, can you use writable volumes for the user profile data?
A. No. It is not supported to have any user-based AppStack or writable volume assigned while a computer-based AppStack is assigned to the machine.
Q. So can I still use AppVolumes to present apps with RDSH?
A. Sure you can, it just has to be machine-based assignments only; user-based assignments and writable volumes can't be used at the same time. While we're on the topic, there's no reason you couldn't present AppStacks to XenApp farms too; it's what CloudVolumes originally did. My recommendation is to use a third-party UEM product for profiles when utilizing RDSH app delivery; trust me, it's something you're going to want granular control over.
Q. What about role based access controls for the AppVolumes management interface? There’s just an administrator group?
A. Correct, there’s no RBAC with the GA release.
Q. Does this integrate with other profile tool solutions?
A. Yes. I've used Liquidware Labs Profile Unity to enable automatic privilege elevation for installers located on specified shares or local folders. This way users can install their own applications (provided they're located on a specified share), which will be placed in the writable volume, or they can one-off update apps that have frequent updates right in the writable volume, without IT intervention.
Posted in Uncategorized

Horizon View Composer SQL Login Password Expiration

So the SQL account password expired for your View Composer. Bummer. That must mean you're here because you're seeing (and googled) error messages like this in Event Viewer under VMware View Composer:

“[28000][Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user ‘<user>’


and this in the %PROGRAMDATA%\VMware View Composer\Logs\vmware-viewcomposer.log


NHibernate.ADOException: cannot open connection —> System.Data.Odbc.OdbcException: ERROR [28000] [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user ‘<user>’. Reason: The password of the account has expired. ERROR [28000] [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user ‘<user>’. Reason: The password of the account has expired.

Good news space heroes, there’s a really well documented VMware KB procedure on how to recover from just such a scenario. Other things to try before doing this procedure:

  • Verify the SQL server and composer database are online
  • Verify the View Composer DSN is in place on the composer server
  • Verify the account is enabled and you can login, testing the DSN with the credentials you used.
  • Verify the account permissions haven’t changed on the View Composer Database
  • Verify the database is intact

In a recent case where I had to fix this quickly, we changed the password to the same password and that still didn't work. Following the procedure of re-creating SviWebService.exe.config did work, because the password is encrypted and stuffed into the config file using the certificate data. The basic steps in the article are:

  1. Find the SviWebService.exe.config file in the DRIVE:\Program Files (x86)\VMware View Composer directory
  2. Copy the SviWebService.exe.config file to a temp directory like C:\tmp
  3. Make a second copy of SviWebService.exe.config in C:\tmp and rename it to SviWebService.exe.config.old
  4. Open a command prompt and cd to DRIVE:\Program Files (x86)\VMware View Composer directory
  5. Open Notepad and paste this command in, making sure to modify the placeholder values to fit your environment:
    1. SviConfig -operation=SaveConfiguration -dsnname=DSNNAME -username=USERNAME -password=PASSWORD -passwordchanged=true -NHibernateDialect=NHibernate.Dialect.MsSql2005Dialect -tcpport=18443 -KeyContainerName=SviKeyContainer -keySize=2048 -CertificateThumbprint="B632769D553604A14361142016189BD5F5722267" -ConfigPath="C:\tmp\SviWebService.exe.config" -OldConfigPath="C:\tmp\SviWebService.exe.config.old"
    2. You can get the Certificate thumbprint by opening the original SviWebService.exe.config in notepad and searching for CertificateThumbprint.
    3. DSNNAME is the name of the DSN as it appears in ODBC Administrator. I opened its configuration and copy/pasted the name to ensure accuracy.
  6. Once you run that, you should see "Configuration saved successfully".
    1. If you see "SviConfig finished with an error. Exit code: 5", you probably did something wrong, like copying the wrong config file or botching one of the file copies. (I copied the wrong config file the first time, then didn't rename them exactly right.)
    2. If you are sure you copied it right and are still getting an error, you'll likely need to back up the SQL database and do a View Composer uninstall/reinstall.
  7. The final step before starting the service is to copy the modified SviWebService.exe.config file back to the DRIVE:\Program Files (x86)\VMware View Composer directory. Works! Huzzah! Now to go talk to that DBA about service account expiration policies…
Posted in Uncategorized

VMware View Windows 7 Desktop Hanging on Shutdown: Doesn’t Close Apps

Are you tired of opening the vSphere console on a desktop that wouldn't recompose, only to find this?

Waiting for background programs to close. Force log off.



Me too. These registry keys will force the closing of open applications. I had to post this just so I would have quick reference to it in the future. Import these registry keys into your golden image and say goodbye to desktops that won’t shut down. When I say shut down you say HOW FAST. That’s right.

Windows Registry Editor Version 5.00

; Standard values for ending hung/open apps at logoff and shutdown
[HKEY_USERS\.DEFAULT\Control Panel\Desktop]
"AutoEndTasks"="1"
"HungAppTimeout"="1000"
"WaitToKillAppTimeout"="2000"


Posted in Uncategorized

AppVolumes is the Chlorine for your Horizon View Pools

Actual things I’ve said:

“So, you have 32 different pools for 400 people?”

“So, you have one full virtual for every employee worldwide?”

When I've said these things or things like it, I usually get the Nicolas Cage look back, like "DUDE I KNOW… It's not like I had a choice."

It seems that plenty of folks have had the same problem of pool sprawl due to application compatibility issues, or things they were unable to script or work around. Having been in delivery in the virtual desktop space means I have seen and lived with the problems of a stateless desktop environment, most of which can be overcome with some time, energy and willingness to script. Most of these obstacles are profile related and can easily be mopped up with third-party tools, but the focus of this post is what happens if we stay inside the VMware product suite without any third-party extras. So the main pain points that remain, apart from profile and application setting abstraction, are:

  • What happens if a user legitimately needs to install their own apps, they want it to persist on the local drive for best performance, and they’re already using a stateless desktop?
  • How can we consolidate as many users into the same desktop pool as possible without application dependency contention?
  • How can we granularly control and minimize risk during locally installed application updates? Also, how can we do it faster?

What ends up happening to most organizations is that we end up with bacteria-like applications which cause admins to create separate pools or put a user on a full virtual. The VDI admins oftentimes end up getting torqued at the "fringe apps" that are not central to the core business but still need to run in the virtual desktops. This is usually because there are more use cases in a department than are initially guessed. A good example is when a Marketing department is actually 4 different use cases rather than 1, due to crazy one-off app requirements. Along those lines, these are some of the problems I've seen with some of the fringe "problem child" applications:

  • No support for AppData redirection
  • Large application in size which drives poor performance on network share or when virtualized
  • Vendor requires locally installed application
  • Apps that require multiple conflicting dependencies, such as versions of Microsoft Office or Office Templates
  • Apps that require weekly or monthly updates that users don't want to have to perform repeatedly (and that View admins certainly don't want to recompose for, ugh)
  • Security requirements that dictate applications cannot be installed for users that need them installed, even if locking them down another way is available.
  • Countless “one-off” applications that will not virtualize, sequence or otherwise be delivered unless they are locally installed

Let's take a look at an image from the AppVolumes page which clearly illustrates the difference between locally installed, network-streamed, and remotely hosted applications.


While AppVolumes won't kill the bacteria in your swimming pool, it will most certainly allow you to clean up your View pools and pave the way to the single-golden-image dream, with instant and simple application delivery along with much cleaner and safer update methods and a faster rollback. It means that the only legitimate need to create a new pool is for resource differences in the VM containers themselves, and the golden image will most likely have no apps installed locally in it. This is absolutely FANTASTIC.

Under the covers, what's really happening is that applications and their settings are captured to a VMDK: the VMDK is mounted to the capture machine, the application is installed to it, and the volume is sealed, disconnected, and then hot-added to multiple VMs simultaneously. The craziest part of this whole method is that updating the installed applications almost never requires kicking a user out of their VM. However, in the software as it stands you have to unpresent the AppStack, then re-present the updated version during an update, so it's not an entirely seamless AppStack update procedure. Still, it's a heck of a lot better than dealing with a recompose that impacts more departments than it needs to due to the use of a COE (common operating environment) pool, which I and others preach when using stateless desktop use cases. Once I get some hands-on time with the product I'll know more about the update procedure and will write about it for sure, but apparently it's been pretty hard to get hands-on even for partners.

So what does this mean looking forward? It means that you can crack open a can of Happy New Year 2015, and start fresh. REALLY. Fresh. It’s the opportunity to start with a clean slate and keep it clean as a whistle, while maintaining persistent user installed applications, per department applications, and one off applications. Not to mention if you own Horizon Enterprise, you will get AppVolumes for free out of the gate! Cheers to that!

I’m going to try to start a new App Volumes themed blog post series for those like myself waiting for Cloud Volumes to become GA. The following are some of the topics I’d like to write on. If you’d like to hear one of them, please sound off in the comments, it’s a good source of motivation. : )

  1. Optimize Your Windows Base Image Like a Boss
  2. AppVolumes: Best Practices for AppStacks and Updating
  3. AppVolumes compared with RDSH for One-Off Application Delivery
  4. A Real World Plan for Migrating to AppStacks from Traditional LC Pools


Posted in AppVolumes, Horizon View 6.0, View

VMware Horizon View: A VCDX-Desktop’s Design Philosophy Update

During the preparation for my VCDX-Desktop, I had the pleasure of meeting a lot of awesome folks. One of them shared this clip with me, and I felt it appropriate to share it here. It is 10 minutes long, but it totally demonstrates why requirements which turn into moving targets can completely change iterations of a design.

I have been on a residency for the past few months in a 1,200 user VDI deployment for a customer in the financial industry, to aid in finishing the VDI rollout portion of the project which had already stalled several times. Yes, this means that as a VCDX-Desktop my first gig was in fact physically deploying zero clients and helping users with random enterprise applications and MS Access databases. Not exactly the high life, but a little bit of humble pie never hurt anyone. Eventually, I came to see this as an awesome experience that delivery consultants seldom get, which was the opportunity to see the VDI project all the way through to the end rather than simply the initial config, set up a few pools and then high fiving the team on the way out the door.

Here are my big picture cautions and design philosophy updates, in no particular order. Please remember that as no two businesses are identical, neither are their VDI requirements. It's extremely difficult to generalize about VDI except to say that requirements vary widely, so apply a grain-full-of-salt here depending on your use cases. Also, some of this isn't news, just public reiteration.


  1. I fail to understand why it continues to take a poorly performing virtual desktop for a storage team to finally arrive at the same conclusion my Mom arrived at while standing in the laptop aisle at Best Buy. If you're an average person buying a computer today, what's one of the first upgrades you'd consider? Likely solid state storage for the disk holding the operating system and the majority of applications. This hammers home the point: if there is any chance that the underlying storage will not perform to the requirements AND THEN SOME, there is no way I would ever sign off on the configuration. Virtual desktops are competing with physical computers that run on a single solid state drive which provides enough storage performance AND THEN SOME. With that in mind, the following are the storage recommendations I'll make.
  2. Storage selected for the large scale VDI projects which have a chance of scaling beyond 500 users should:
    1. Be on a frame/solution separate from, and not impacted by, server-based disk workloads. Several customers I worked with that utilized a shared server/desktop storage platform eventually ended up splitting it out. Just save yourself some time and hair pulling and do it up front. You should already be doing this for hosts; why wouldn't you for storage as well?
    2. Should be high performing, with at least 97% of all writes going directly to flash or hitting cache (VDI is generally 80/20 write/read, so write performance is extremely important; much more important than most replica-tiering and VSA documents, which offload reads, would have you believe).
    3. Should deliver consistently low latency. The highest physical device command latency that should EVER be experienced on a host running virtual desktops, in my experience, is 7 ms. The lower you can get it consistently, the better.
    4. Should be designed with a random, spiky, write-heavy workload in mind.
    5. Does your storage team have time to validate that all desktops are meeting the above requirements all the time? If not, then just push the all flash array performance win button. If you size the church for Easter Sunday, it’s safe to say it will work the rest of the year too.
    6. I really enjoy the hyper-converged model for a couple of reasons:
      1. Inherently segregated from other disk workloads
      2. The Virtual Desktop team is the storage team
      3. Predictable cost that scales in smaller outlays of cash (at around 90 users per host) rather than large outlays for additional disk trays, storage nodes, and storage fabric ports
      4. Generally cheaper per desktop
      5. If you utilize VMware’s Virtual SAN, here are a few tips: use a minimum of four nodes, use only 15K magnetic disks, double-check the HCL for all components (specifically the disk controllers), and give yourself more SSD space than you think you need. Here’s a great reference architecture if you’re looking at this route.
  3. Do your bake off and functional requirement homework. Select the storage solution that gives you the best performance and meets your requirements with the lowest risk. Speak to customers that have the solutions in prod and have performed code updates. I get that cost is always a factor, but I need to stress again that this storage needs to be rock solid from both a performance and availability standpoint.
  4. Regardless of the solution chosen, storage performance must be validated before going live in production. This means either use View Planner or pony up for LoginVSI. This step gets missed ALL THE TIME. Trust everyone and cut the deck. If you fail on this step, you’ll find out about performance bottlenecks only once you already have users in production on the hardware. Trust me when I say you don’t want that.
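To put rough numbers behind the write-heavy guidance above, here’s a back-of-the-napkin sizing sketch in Python. The per-desktop IOPS, write ratio, and RAID penalty are illustrative assumptions, not measured values; substitute figures from your own desktop assessment:

```python
# Back-of-the-napkin VDI write IOPS estimate.
# All inputs are illustrative assumptions; plug in your own assessment data.

def required_backend_write_iops(desktops, iops_per_desktop=10,
                                write_ratio=0.8, write_penalty=2):
    """Estimate backend write IOPS for a VDI pool.

    write_ratio   -- fraction of IO that is writes (VDI is commonly ~80/20)
    write_penalty -- backend IO amplification per frontend write
                     (e.g. 2 for RAID 10; RAID 5 would be higher)
    """
    frontend_writes = desktops * iops_per_desktop * write_ratio
    return frontend_writes * write_penalty

# 500 desktops at a steady-state 10 IOPS each, 80% writes, RAID 10:
print(required_backend_write_iops(500))  # 8000.0 backend write IOPS
```

Run the numbers for your worst observed steady state plus boot/login storms, not the average; sizing the church for Easter Sunday means using the peak.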
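Similarly, the 7 ms latency ceiling is easy to turn into an automated check. The sample data below is hypothetical; in practice you would feed this from whatever stats collector you use (vCenter performance exports, esxtop batch output, etc.):

```python
# Flag hosts whose physical device command latency breaches a ceiling.
# The samples dict is hypothetical stand-in data; in practice, pull these
# values from your stats collector (e.g. vCenter or esxtop exports).

LATENCY_CEILING_MS = 7.0  # highest acceptable device latency for VDI hosts

samples = {
    "esx01": [2.1, 3.4, 2.8],
    "esx02": [5.0, 9.2, 6.7],   # one spike over the ceiling
    "esx03": [1.2, 1.9, 2.5],
}

def hosts_over_ceiling(samples, ceiling_ms=LATENCY_CEILING_MS):
    """Return hosts with any latency sample above the ceiling, sorted."""
    return sorted(h for h, vals in samples.items()
                  if max(vals) > ceiling_ms)

print(hosts_over_ceiling(samples))  # ['esx02']
```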

VM Sizing/Golden Images/Windows Optimization

  1. For all virtual desktop designs I’m composing in the future involving Windows 7, the rule rather than the exception is the Knowledge Worker desktop: 2 vCPU, 2 GB of RAM. Ever since my VCP4 test days, I’ve stuck with the good ole mantra of “Start with a single vCPU and scale up from there”. Well ladies and gentlemen, I’ve changed my stance on that for virtual desktops. Think about it this way: you’d get looked at like an idiot if you bought a new PC off the shelf at Best Buy with a single-threaded CPU. You can get by with one for SUPER cotton candy workloads and task worker scenarios, however I’m finding more often that this is definitely the exception rather than the rule, especially for desktop replacement use cases. If you have a true task worker use case you can get by with 2 cores per socket, just validate that the IE and local app performance is acceptable. Not to mention that the PCoIP server service can consume quite a bit of CPU for graphically intensive screen changes; when that happens, you run the risk of one application (Internet Explorer, for example) spiking to 100%, impacting the quality of the session and potentially freezing it. I based a lot of my sizing recommendations on the document titled Server and Storage Sizing Guide for Windows 7 Desktops in a Virtual Desktop Infrastructure, which is a decent document; check it out if you’re interested.
  2. I do not recommend that you utilize an x64 Windows VDI image unless there’s a requirement that pushes above the 4 GB memory threshold, or other application requirements demand it. The simple reason is that enterprise application compatibility doesn’t always track the bleeding edge. It also generates a ton of confusion around DSN locality, IE versions and their corresponding plugins, Java versions, complexities with ThinApps, etc. Ask yourself this question: is every end user application in my enterprise compatible with x64-based Windows TODAY? If the answer is yes, fire away. My guess for large organizations is that it isn’t, and you will eventually be doing what I have been doing recently: griping every time something is 64-bit native instead of 32-bit, and vice versa. COE stands for COMMON operating environment, not uncommon.
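As a rough illustration of how the 2 vCPU / 2 GB knowledge worker sizing translates into host counts, here’s a simple consolidation estimate. The host specs, overcommit ratio, and overhead numbers are assumptions for illustration only; validate against your own hardware and assessment data:

```python
# Rough host-count estimate for a 2 vCPU / 2 GB knowledge-worker image.
# Host specs and overcommit ratios below are illustrative assumptions.
import math

def hosts_needed(desktops, host_cores=24, host_ram_gb=256,
                 vcpus_per_desktop=2, ram_gb_per_desktop=2,
                 vcpu_per_core=8, ram_overhead_gb=16, ha_spare=1):
    """Return host count from the tighter of the CPU and RAM constraints,
    plus HA spare hosts (N+1 by default)."""
    per_host_cpu = (host_cores * vcpu_per_core) // vcpus_per_desktop
    per_host_ram = (host_ram_gb - ram_overhead_gb) // ram_gb_per_desktop
    per_host = min(per_host_cpu, per_host_ram)  # binding constraint
    return math.ceil(desktops / per_host) + ha_spare

# 500 knowledge workers on hypothetical 24-core / 256 GB hosts, N+1:
print(hosts_needed(500))  # 7
```

Note how the CPU constraint (96 desktops per host at 8:1 vCPU:core) binds before RAM here, which lines up with per-host densities in the ~90 user range.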

Enterprise Application Management

  1. If you don’t have an enterprise application management and client OS strategy, you don’t have a VDI strategy. The VDI is really just another way of accessing the same applications, user data and personalization that would otherwise be available on a physical machine. This means you must determine what app deployment methods you are using, whether locally installed, ThinApp, App-V, XenApp or another app management solution. As a part of this strategy, you must address printing, AppData directories and persistence of the look and feel of the applications and their settings, whether with profile-based products like Liquidware Labs ProfileUnity and AppSense, or with more encompassing products like Unidesk and App Volumes. Also, you need to account for the biggest problem with stateless desktops: the ability for users to install and persist their own applications on high-performing disk. A lot of times this can be something simple like a web plugin, something that helpdesk can handle on a physical desktop no problem. The use case may dictate whether apps can be installed by users as “one-off” installs, or whether the iron fist of IT rules the desktop and approves and installs apps no matter their size.

Systematic Rollout Plan

  1. You must have a rollout plan, and more importantly YOU MUST ADHERE TO IT. “If you aim at nothing, you’re certain to hit it every time.” If you don’t have an implementation plan you’re destined to fail even with best-in-class hardware and network performance. However, you can certainly fail after generating a plan by simply not adhering to it. Disclaimer: I work for AdvizeX Technologies, and several of our Sr. EUC consultants along with myself have together come up with our own methodology for engaging business departments independently and capturing all requirements in a predictable, repeatable and documented fashion while ensuring that end user satisfaction is met. Typically, determining the requirements is done in a full plan and design; however, it’s almost impossible to answer the question “What does everyone do everywhere?” in the timeline that a Plan and Design project gives. Usually guard rails keep Plan and Design projects to several use cases, as decided at the beginning of the engagement. This means that you must have a process for when you go to engage a department, with its application specifics, that wasn’t in your plan and design.

These are just some of the tips I’ve picked up on the most recent project, which has spanned the last few months. I welcome any open conversation and fearless feedback on the matter, as civil disagreements most often lead to positive change.

