How To Reclaim ESXi VMFS storage with Ubuntu VMs

First of all, zero the deleted space inside the guest. You can do that easily with secure-delete. In the Ubuntu guest, the srm command below recursively and verbosely deletes all files and folders, zeroing the old data; the doubled -l flag reduces the wipe to a single fast pass.

df -h
sudo apt-get install secure-delete
sudo srm -r -f -v -z -l -l /dir/to/clean/*
df -h
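If secure-delete isn't available, a common alternative is to fill the free space with zeros using plain dd and then delete the fill file. The snippet below demonstrates the zero-write on a small temp file; in real use you would omit count= so dd runs until the disk is full, then delete the fill file so the freed blocks are zeroed.

```shell
# Demonstration: write a zero-filled file and verify every byte is zero.
# In practice: dd if=/dev/zero of=/mount/point/zerofill bs=1M   (runs until full)
#              sync && rm /mount/point/zerofill
TMP=$(mktemp -d)
dd if=/dev/zero of="$TMP/zerofill" bs=1M count=4 status=none
# Count non-zero bytes; 0 means the file is entirely zeros.
nz=$(tr -d '\0' < "$TMP/zerofill" | wc -c)
echo "nonzero bytes: $nz"
rm -f "$TMP/zerofill"
```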

Next, simply Storage vMotion the VM to another datastore with the thin provisioning option set. If that’s not an option for you, you’ll need to power off the VM.

In ESXi:

<Power off the VM>

cd /vmfs/volumes/to/your/vm/folder
vmkfstools --punchzero vmdk-to-shrink.vmdk
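If a VM folder holds several disks, a small dry-run helper can generate the punchzero commands for review before running anything. This is only a sketch: the path is an example, the loop just prints commands, and -K is (to my knowledge) the short form of --punchzero.

```shell
# Print (not run) a punchzero command for each VMDK descriptor in a VM folder.
# Descriptors are the small *.vmdk files; *-flat.vmdk / *-delta.vmdk extent
# files are skipped because vmkfstools operates on the descriptor.
punchzero_dryrun() {
  for d in "$1"/*.vmdk; do
    [ -e "$d" ] || continue
    case "$d" in
      *-flat.vmdk|*-delta.vmdk) continue ;;
    esac
    echo vmkfstools -K "$d"
  done
}
# On the host you would run e.g.:
#   punchzero_dryrun /vmfs/volumes/datastore1/myvm   (example path)
```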
Posted in Uncategorized

Load Balancing Horizon View with NSX

This post will explain in detail how I set up NSX load balancing in a test environment while still meeting my load balancing requirements. The requirements I had for a load balancer for Horizon View were:

  1. As simple as possible
  2. Session performance is acceptable and scalable
  3. Ability to automatically fail nodes so a reconnect gets back in less than one minute
  4. Balance Horizon View SSL, PCoIP and HTML5 Blast session traffic appropriately
  5. The entire solution must be able to leverage a single public IP address for all communication

So there are two primary ways of constructing an NSX load balancer: inline or one-armed. These drawings were borrowed from a VMworld 2014 slide deck.


In this particular example, we opted for one-armed, which means the load balancer node lives on the same network segment as the security servers.

Let’s take a look at logically how this is set up in the test lab. You may notice that I used two ports for PCoIP, 4172 and 4173. This is because I wanted to keep it as simple as possible, and with the requirement of a single IP address, keeping the PCoIP listen port for each security server separate was the easiest option. I was able to load balance and persist both 443 and 8443 with NSX, but stepped around the balancing for PCoIP and simply passed it through. In production you can just use another public IP address if you want to keep PCoIP on 4172, then just NAT it to the corresponding security server. If you need more detail on the lab setup, you can check out my previous post which has more detail on the firewall rules used.
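The single-IP PCoIP workaround above boils down to a static port-to-server mapping: one listen port per security server, each NATed straight through. A sketch of that mapping (the IP addresses are placeholders, not from the lab):

```shell
# One public IP, two PCoIP listen ports, each passed straight to one
# security server. Backend addresses below are illustrative only.
pcoip_backend() {
  case "$1" in
    4172) echo "192.0.2.11" ;;   # security server 1 (default PCoIP port)
    4173) echo "192.0.2.12" ;;   # security server 2 (alternate port via KB reg change)
    *)    echo "unknown"; return 1 ;;
  esac
}
```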


General finding notes about the setup before we get into screen shots:

  • The demonstrated configuration is for external connectivity only at present, since persistence matters most when tunneled.
  • SSL passthru is utilized rather than offloading.
  • The DFW is in play here as well, and all firewall rules are in place according to the required communication.
  • In order to configure the security server to listen on 4173, I had to follow the KB and apply a registry hack on the box, AND configure the tunnel properly in the View GUI.
  • In order to get SSL and BLAST working together from a single VIP, I ended up needing separate virtual servers, pools and service monitoring profiles that all used the same VIP and Application profile.
  • I ended up using ssl_sessionid as the persistence type for both SSL and BLAST, and ended up using IPHASH for the load balancing algorithm.
  • I wasn’t able to configure a monitor to correctly say that 8443 was UP, instead it was in WARN(DOWN) on the 8443 monitor. I opted to simply leverage 443 as the monitor for the blast VIP, so there could potentially be a missed failover if only the blast service was down.

Let’s take a look at the configuration screen shots of the View specific load balancer edge that I used.

NSX-LB2015-09-24 23_17_41

We opted for the simple configuration of SSL passthrough, and SSL Session ID was used for persistence.

NSX-LB2015-09-24 23_18_01

I found you had to create a separate monitor for each pool for it to function properly, not really sure why.

NSX-LB2015-09-24 23_31_03

You should be able to use the /favicon.ico to determine health as we do with other load balancers, but the plain GET HTTPS worked for me and favicon didn’t.
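The monitor that worked was essentially a plain HTTPS GET. Outside NSX, you can reproduce the same probe with curl to preview what the monitor will see; this is a sketch, assuming curl is on the path, and the hostname in the comment is an example.

```shell
# Report UP if a plain HTTPS GET gets any HTTP response, DOWN otherwise.
# -k skips certificate validation, matching SSL passthrough to a server
# with an untrusted/self-signed certificate.
check_health() {
  code=$(curl -k -s -o /dev/null -w '%{http_code}' --max-time 5 "$1")
  if [ "$code" != "000" ]; then echo UP; else echo DOWN; fi
}
# e.g. check_health https://view-ss1.example.com/   (example hostname)
```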

NSX-LB2015-09-24 23_31_16 NSX-LB2015-09-24 23_31_25

** Important: notice that no ports were specified in the server pool, only the monitor port. Ports are specified only in the virtual servers below.

NSX-LB2015-09-24 23_43_47 NSX-LB2015-09-24 23_43_59

NSX-LB2015-09-24 23_45_04

NSX-LB2015-09-24 23_45_13

NSX-LB2015-09-24 23_46_20 NSX-LB2015-09-24 23_46_41

Let’s take a look at the screen shots of the View security server configuration.




If you have to make a security server listen on another port, don’t forget to change it in the registry as well as in the secure gateway URL field. Also don’t forget to let it through the firewall.


And voila! We have a working configuration. Here’s a quick demonstration of the failover time, which can be completely tuned. Of note, we only tested Blast failover, but SSL and PCoIP work just fine as well. The best part is we were able to create a COMPLETELY separate isolated and firewalled security server without even leveraging a DMZ architecture.

If you’re looking for another AWESOME post on how the NSX load balancer is utilized, I learned more from this one by Roi Ben Haim than from the pubs. Great step by step setup that helped me big time.

Posted in Horizon View 6.0, NSX

Horizon View and VMware NSX – Zero Trust Install

Dave Kennedy is a netsec veteran and industry leader in the information security space, has written multiple books on the topic of penetration testing, is the founder of @TrustedSec, wrote the Social Engineering Toolkit, founded @Derbycon and on and on. Point is dude knows how to find a way in and pop a box like a boss. Check out this tweet he had not too long ago.


And with that red underlined statement, in my mind the micro-segmentation use case for VMware NSX speaks for itself. One of the easiest places to get access from the outside is a user’s desktop, simply because users (oftentimes helpdesk) can be tricked into visiting a malicious website which can exploit the user’s desktop. We see this time and time again with not just email attachments, but phone spear phishing as well. Protecting the virtual desktops and servers with the DAPE model (Deny All, Permit Exception) is no longer a nice-to-have, but a near requirement for many businesses; one which security and networking teams are having a hard time meeting once inside the perimeter and underneath the DMZ, especially with east-west traffic inside defined networks.
It is for this reason I have been working the past few weeks on integrating NSX 6.2 and Horizon View 6.1 in my lab to implement not only zero trust desktops, but an entire zero trust hardened Horizon View Infrastructure. To me, a zero trust network infrastructure is pretty much like getting network access on a “need to have basis”. Unless specifically allowed (by policy), you’re blocked (by policy); not by traditional IP based rules. The pieces I set up in my lab include minimum functional connectivity to service the following components and services:
  • Horizon View Connection Servers
  • Horizon View Security Servers
  • Connectivity with vCenter and Horizon View Composer
  • Virtual Desktops with Identity based Firewall
  • NSX Load Balancers (will do a separate post for this)
Let’s take a look logically at how the lab network is set up. It’s important to remember that while I set these up with NSX based logical networks for scalability, the NSX DFW (distributed firewall) does not require any NSX logical networks to function; the DFW can work just fine even with standard virtual switching because the firewall lives in the ESXi kernel.


Now let’s take a look physically at how this simple lab is set up.


Now let’s talk about some of the View components and what east-west and north-south firewall rules we conceptually need to create to harden the Horizon environment, not including user-defined rules. The generic categories below cover the traffic we need to concern ourselves with. Remember, we are not providing an open IP stack; rather, we provide only the required ports for the software to function. Most of these ports can be found in the KB which defines View connectivity requirements.
Security Server
External clients <–> View Security Servers (HTTPS, PCoIP, Blast)
View Security Servers <–> Virtual Desktops
Block the rest
Connection Server
View Connection Server <–> vCenter Server
View Connection Server <–> View Replica Server
View Connection Server <–> Virtual Desktops
View Connection Server <–> AD/DNS
View Connection Server <–> View Security Server
View Connection Server <–> View Administrator Static IPs
Block the rest
Virtual Desktops
Virtual Desktops <–> AD/DNS
Dynamic User Logins <–> Specified Internal Resources over Specified protocols
Block the rest
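The DAPE pattern these categories describe (explicit permits evaluated first, then a default block) can be sketched as a first-match rule walk. The names below are shortened stand-ins, not the actual NSX objects:

```shell
# First matching rule wins; no match falls through to deny (the DAPE default).
# The rule list is a toy subset of the conceptual tables above.
dape_check() {   # usage: dape_check <src> <dst>
  while read -r src dst verdict; do
    [ -z "$src" ] && continue
    if [ "$1" = "$src" ] && [ "$2" = "$dst" ]; then
      echo "$verdict"; return
    fi
  done <<'RULES'
security-server desktop allow
connection-server vcenter allow
desktop ad-dns allow
RULES
  echo deny
}
```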
So now that we’ve seen the categories conceptually, let’s look at the service definition and service groups that I came up with. I broke them up into three basic categories.
  • Active Directory/DNS
  • Horizon View
  • User Defined

Let’s take a look at the Active Directory rules, per several Microsoft KBs and some trial and error.

Custom Active Directory Aliases (NSX individual “Services”)

Custom Horizon View Aliases (NSX individual “Services”)
So we can group those services into service groups for easy, clean-looking rules. Let’s take a look at those.
View Connection Server <–> vCenter
View Security Server <–> View Desktops
Security Servers <–> Connection Servers
View Desktops <–> Connection Servers
* Interesting note, at the time of writing with View 6.1.1 TCP 4002 was not a documented port requirement in the network requirements KB between desktops and connection servers and I found that it was definitely needed.
View Desktops <–> AD/DNS
ad1 ad2 ad3
View Connection Server <–> View Connection Server
*Note that even though it’s not documented, ADAM requires the random port range. Screen shot below of the connections. Even though they’re not documented on the network requirements page, I found most ports for AD (ADAM) communication are required from Connection Server to Connection Server; I discovered this with Log Insight when I kept banging my head during the second connection server install.
User Defined Security Groups
So first of all you have to integrate the NSX manager with Active directory so that you can pull users and allocate ACLs based on users in AD. Do that under the NSX manager \ Manage section and add the domain.
Once the domain is configured, now we can actually add users into the security groups that we’ll use in the firewall rules. Let’s look at one of the dynamic user examples we’ll go over in this post, View Helpdesk staff that need access to the view admin manager web page and the vCenter web interface, but nothing else. Go to the grouping objects then security group and make a new security group:
Then let’s pick the user group out from AD that we already selected. Note the required menus to select the user group.
And finish after excluding anything you want to make sure is always excluded from the security group, e.g., domain controllers. If Directory Group is not an option, you need to check your domain integration from the previous step.
So now that we’ve defined the source and destination, service groups and dynamic user groups to be firewalled, let’s check out the firewall rules that tie it all together with some commentary.
I put all the block rules last per the norms of DAPE ACL ordering. We can see here that rules 1 & 2 give NSX_View_Helpdesk_Access static access to a couple of VMs: View, Log Insight and vCenter via HTTPS.
Most of the security groups for the desktops had dynamic membership based on VM name, such as Win7-, but other filters could be used as well such as OS Windows 7, etc.
Here I specified all the required rules in and out of the VDI infrastructure, including open IP stack from a couple manually specified administrator desktop IP addresses. Notice how nobody even has access to the connection servers except the admins?
These rules block any traffic in or out of either desktops or security servers unless specified above.
Results are good and as intended.
We can see that dynamically the user on the VM was picked up and the VM was added to the rule.
Big Picture Lessons Learned
  1. Knowledge of traffic being dropped by the DFW is paramount, otherwise you’re going to HAVE A BAD TIME, lol (South Park, anyone?). Seriously though, you need to be able to troubleshoot and loosen ACLs for required traffic and find undocumented port requirements like I had to for View, and View has GREAT network requirement documentation. The best way I found to get this information quickly is to log all firewall rules, then leverage VMware Log Insight with the NSX plugin to filter on drops (it installs in about 15 minutes). That made it super simple to figure out which ports were being used between two endpoints. Flow Monitoring with NSX 6.2 is another great option to use before putting the rules in place, to understand the different sources and destinations in play.
  2. With NSX 6.1.4, if VMware Tools wasn’t running in the VM and vCenter didn’t know the IP address of the VM, none of the DFW rules took effect with the VM selected as the source. Now with 6.2, DFW rules apply whether or not vCenter is aware of the VM IP address. The problem was easy enough to fix with an IP subnet based block catch-all rule, but the point of policy driven access is getting away from static configurations.
  3. If you change user group membership in AD, you must resync the AD deltas from within the vSphere web client for that to take effect immediately.
  4. With NSX 6.2, there is a drop-down option when creating the firewall rules to block traffic in/out of the source destination. I found that to completely block traffic in both directions, two rules were required with source destination reversed.

I would absolutely love to give y’all an export of the rules so you could just import them, but sadly it’s not that easy to import them once the XML is exported. As soon as Tristan Todd’s rumored NSX rule import fling is available, I will post a CSV with all the Services, Service Groups and Firewall Rules so you can start with what I put together and automatically import them.

I hope this was helpful for you guys! Next up, I’ll be introducing security server load balancing with NSX, followed by AppVolumes integration, Trend Micro Deep Security AV and hopefully more.

Posted in Horizon View 6.0, NSX, View

How to configure PERC H730 RAID Cards for VMware VSAN

Hey All, just wanted to write up a quick post with pics on how to configure the H730 RAID card for VMware VSAN.

First of all, the H730 is on the HCL and is a great choice due to the ample queue depth of 895 as determined by esxtop.


Here is how we configured it per the configuration guide best practices, with:

  • Passthrough mode
  • Controller caching disabled
  • BIOS Mode pause on error (rather than stop)

Per the guide:


Just figured this might help some other wayward souls out there. : )


raid_03 raid_04 raid_05 raid_06


Posted in Uncategorized

Bobby Boucher, persistent virtual desktops ARE THE DEVIL!


No they’re not. In fact, most desktops are persistent, they’re just not virtual. Other industry experts have already had this discussion many times and I think it’s my turn to sound off.

I have to be completely honest: I’m just plain tired of the headache that non-persistent desktops end up causing from a management perspective, especially when a group of users is placed in the wrong use case bucket because the proper strategic use-case analysis wasn’t performed in the first place. Sometimes that stateless headache is totally worth it, and other times it isn’t. The concept of deleting a user’s desktop at logoff is noble from a clean and tidy “fresh desktop” standpoint, but it also inherently causes a number of issues that plenty of customers just can’t get past in the long haul, especially if the use case was incorrectly assessed initially. It takes a lot of administrative work to deliver a near-persistent, auto-refreshing desktop experience, so let’s break this down with some simple pros and cons for each.

Non/Near Persistent Desktops

Pros:
  • Can meet very low RTO/RPO disaster recovery requirements with abstracted user files/app data
  • In environments with rolling concurrent session counts it can reduce overall costs
  • Will allow for the fastest provisioning of desktops (especially with project Fargo)
  • Storage capacity requirements can be reduced because of natively thin desktops
  • Logging onto recently refreshed desktops ensure a consistent operating environment
  • Fits a large majority of task worker and non-desktop replacement use cases extremely well

Cons:
  • A UEM is required to persist application data and user files, and staff must learn and become comfortable with the tool
  • Application delivery and updates must (for the most part) be abstracted and delivered quickly; this often requires additional software and training (App-V, AppVolumes, ThinApp, etc.)
  • Any appdata problems that persist in the profile completely negate the “fresh desktop” stateless desktop idea
  • Does not typically integrate well with traditional desktop management solutions such as SCCM, LANDesk and Altiris
  • The smallest configuration change for a single user in a stateless desktop can result in hours of work for the VDI admin
  • Applications that require a non-redirected local data cache must be re-cached at each login or painfully synched via the UEM

Persistent Desktops

Pros:
  • Users live on their desktop, not partly on a share and partly in a desktop
  • Consistent experience; the smallest changes (including those performed by the helpdesk) unquestionably persist
  • Simple to administer, provision and allocate
  • Fits all desktop replacement use cases extremely well
  • Desktop administrators do not need to introduce new management tools
  • The user pretty much never has to be kicked out of their desktop except for reboots required by patches, software installs

Cons:
  • Any disaster recovery requirements will add a lot of complexity, especially without OTV
  • Non-metrocluster persistent desktop DR plans typically dictate 2 hour+ RTO for environments at scale
  • Persistent is resource overkill for the task worker and occasional use non-desktop replacement use cases
  • Can be cost prohibitive: with rolling concurrent session counts, providing 1:1 desktops can increase overall infrastructure costs
  • Year 2 operational targets can be missed; operational “this person left 2 years ago and is still on his desktop” type problems
  • Relies on existing application distribution expertise
  • Problems in the desktop must be handled with traditional troubleshooting techniques
  • Without proper storage infrastructure sizing underneath, simple things like definition updates or a zero day patch can make an entire environment unusable

My big picture takeaways:

Perform strategic business-level EUC/mobility use case analysis up front. If an optimized method of delivering EUC is realized, select the best combination of desktop and/or server and software virtualization for the business. Once the technology has been selected, do the detailed plan and design for the chosen technology. Don’t slam the strategic business analysis and the product design together, because you need to take a good hard look at what you actually need and where you can optimize before you put a blueprint together. As part of the business use case analysis, make sure to get sign-off on the virtual desktop availability and recoverability requirements. The outcome of such an analysis should indicate which specific departments will have great outcomes with a virtualized desktop experience and which will see less or no value.

Are you building an infra that will let you provision virtual desktops with amazing speed and IOPS galore? Great! That’s not the only goal. Make sure it’s merely a dependable component of a solution that is manageable and meets the business requirements.

Going with a non-persistent desktop to save on storage space is ludicrous, especially with the prevalence of high performing storage platforms with block level deduplication available today and the amazing low TCO of HCI.

The idea that a non-persistent desktop is easier to manage because of the available tools is complete nonsense. There are risks and rewards for both choices. For example, there are plenty of medical use cases that are terrific fits for non-persistent desktops because of the lack of personalization and individuality required; most of the time the staff members use kiosks, so a generic desktop is nothing new to them. However, take a senior corporate finance officer who has a critical Access 97 database with custom macros and software dependencies that took a VBA consultant a year to write, and who also has more manually installed applications than his desktop support team has ever seen, and you’re going to be a sad person if you put them on a non-persistent desktop.

If you ask me, once the DR complexity for persistent desktops is fixed, it will drastically shift the conversation again.

As usual, if there are any pros and cons I forgot, feel free to sound off, argue, high five me, or not.








Posted in Uncategorized

The Year of the VDI

I seriously cannot stand the phrase “The Year of the VDI”, and I’m pretty sure most of you reading this might be here because you feel the same way. The notion has always annoyed me because it implies that, having solved portions of the infrastructure and application delivery problems at each step, now is the year we can relax, push a giant VDI easy button, and watch VDI finally “take off” like it never has before. VDI adoption is definitely increasing, with a large part of that due to the fact that more people are actually in the pool (HAH, no pun intended) rather than sitting on the pool chairs wondering if it’s actually a good idea to go swimming.

I seem to be having this conversation with customers more and more in regards to use cases. Let’s start with the basics of what a technology use case really is. I usually consider a use case to be how a technology can solve a specific business problem, enable the business, or optimize technology. A virtual desktop use case is not a compilation of user data about OS, applications, devices, and desktop resource requirements. A good virtual desktop use case is a set of conditions which ends up with a positive, efficient, rapid-delivery EUC experience for the end user, and decreased operational overhead for IT. Simply put, VDI doesn’t fit everywhere; that’s exactly why we should be providing a full palette of desktop delivery options, each with its own supporting group of folks. For the purposes of this article, I don’t take alternative presentation delivery methods into account, such as XenApp or RDSH, which can be combined with any of the options below.

  1. Physical PC
  2. Persistent Desktop
  3. Near Persistent Desktop
  4. Non-Persistent Desktop

For example, a beefy physical PC is a requirement for some engineers that require offline access to large drawings. It’s also a requirement for someone like myself who is a delivery consultant who travels and integrates with many different networks and customer environments. There are other examples of each use case I could give, but you guys get it. The point is the operational and functional requirements for each department, IT organization, security division are different. Knowing which bucket or combination of buckets to put the users in is a paramount skill that the project team must have.

Brian Madden has it spot on when he says that executives pretty much care about desktops just as much as they care about desks, lamps, office chairs, office space, and cell phones; they’re just part of the necessary equipment that employees need to get their job done. When a VDI project goes sideways, the operational value-add can often be questioned by the executive layer. For this reason, I don’t think there will ever be “A Year of the VDI”. Virtual desktop projects take the problems that you know and love on each individual piece of desktop hardware and MOVE them to the datacenter, not remove them. For some use cases they remove way more problems than they introduce, and for others it’s vice versa.

There are some aspects of desktop management that VDI simplifies drastically; there are others that it complicates, creating work for the administrative team that didn’t exist otherwise, specifically anything involving a UEM product. The year of the VDI? How about we take a year to learn how to manage traditional desktops before we tackle the virtualization of all our desktop applications and operating systems. Here’s a screenshot of Netmarketshare’s desktop OS data, August 2014 to June 2015. Anybody notice that Windows XP actually gained market share at the end of last year and since then has only slowly trickled down?


For the shops that have it together and can place users in the correct use case buckets, keep it up. For the shops still leveraging outdated desktop operating systems, you need to think long and hard about why you’re doing what you’re doing from an SDLC perspective. In my opinion, VDI will continue to gain adoption, but at a pace slower than server virtualization did. Capex benefits were readily realized with server virtualization and consolidation, but VDI primarily delivers opex savings, which are more difficult to measure and less justifiable to executives without an enablement initiative.

So what is my recommendation? Stop thinking about how to deliver desktops and focus on the delivery of Windows and applications, then work backwards to a capable solution needed to provide the services to the business, whether the users are leveraging physical desktops, virtual desktops or virtual servers. At that point you should start to see the use cases fall together in a much clearer fashion.

Posted in Uncategorized

AppVolumes 2.9 – Near 0 RTO Multi-Datacenter Design Options

So this is more of me just thinking out loud on how this could be accomplished. Any thoughts or comments would be appreciated. Check Dale’s blog “VMware App Volumes Multi-vCenter and Multi-Site Deployments” for a great pre-read resource.

The requirements for this solution I’m working on are for a highly available single AppVolumes instance spanning two highly available datacenters with a 100% VMware View non-persistent deployment. A large constraint is that block-level active/active storage replication is to be avoided, which is largely why UIA writable volumes are such a large piece of the puzzle; we need another way to replicate the UIA persistence without leveraging persistent desktops. CPA will be leveraged for global entitlements and home sites to ensure users only ever connect to one datacenter unless there is a datacenter failure. AppVolumes and writable UIA volumes are to be utilized to get the near-persistent experience. There can be no loss of functionality (including UIA applications on writable volumes) for any user if a datacenter completely fails. RTO = near 0, RPO = 24 hours for UIA data. In this case assume <1ms latency and dark fiber bandwidth available between sites.

Another important point is that as of AppVolumes 2.9, if leveraging a multi-vCenter deployment, the recommended scale per the release notes is 1,700 concurrent connections per AppVolumes Manager, so I bumped up my number to 3 per side.
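That sizing works out to a simple ceiling division over the 1,700-connection figure. A quick sketch; the 5,000-session count below is an invented example, not a number from this post:

```shell
# Managers needed = ceil(sessions / 1700), done with integer math:
# adding (PER_MANAGER - 1) before dividing rounds up.
PER_MANAGER=1700
managers_needed() { echo $(( ($1 + PER_MANAGER - 1) / PER_MANAGER )); }
managers_needed 5000   # prints 3
```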

So, let’s talk about the options I’ve come up with leveraging tools in the VMware toolchest.

  • Option 1a: Leveraging AppVolumes storage groups with replication
  • Option 1b: Leveraging AppVolumes storage groups with replication and transfer storage
  • Option 2: Use home sites and In-Guest VHD with DFS replication
  • Option 3a: Use home sites and native storage replication
  • Option 3b: Use home sites and SCP/PowerCLI scripted replication for writable volumes, manual replication for appstacks
  • Option 4: Use a stretched cluster with active/active storage across the two datacenters

Option 1a: Leveraging AppVolumes Storage groups with Replication



The problem is that AppVolumes does not allow storage group replication between two different vCenter servers, and having multiple pod vCenters is a requirement for the Always-On architecture. I read in this community post that the error message in System Messages reads: “Cannot copy files between different vCenters”, and a fellow technologist posted recently that he tried it with 2.9 and the error message still exists. So in short, Dale’s blog remains true in that storage groups can only replicate within a single vCenter, so this isn’t a valid option for multi-pod View.

Option 1b: Leveraging AppVolumes Storage groups with Replication and Transfer Storage




So that’s another pretty good workaround option. It requires native storage connectivity between the two pods, which means there must be plenty of available bandwidth and low latency between the sites, as some users in POD2 might actually be leveraging storage in POD1 during production use. Providing you can get native storage connectivity between the two sites, this is a very doable option. In the event of a failure of POD1, the remaining datastores in POD2 will sufficiently handle the storage requests. Seems like this is actually a pretty darn good option.

** EDIT: Except that storage groups will ONLY replicate AppStacks, not writable volumes, per a test in my lab, so that won’t work. Also there’s this KB stating that to even move a writable volume from one location to another, you need to copy it, delete it from the manager, re-import it and re-assign it. Not exactly runbook-friendly when dealing in the thousands.

Option 2: Use Home Sites and In-Guest VHD with DFS Replication



The problem with option 2 is In-Guest VHD seems like a less favorable option given that block storage should be faster than SMB network based storage, particularly when the software was written primarily to leverage hypervisor integration. I’m also not sure this configuration will even work. When I tried it in my lab, the AppVolumes agent kept tagging itself as a vCenter type of endpoint, even though I wanted it to use In-Guest mode. Tried to force it to VHD without luck. So I’m pretty sure this option as well is out for now, but the jury is still out on how smart this configuration would be compared to other options. Leveraging DFS seems pretty sweet though, the automatic replication for both AppStacks and Writables seems like a very elegant solution, if it works.

Option 3a: Use Home Sites and native storage replication


The problem with option 3 is the RTO isn’t met and it adds recovery complexity. From an AppVolumes perspective it is the easiest to carry out, as the writable volume LUNs can be replicated and the AppStack LUNs can be copied manually via SCP or the datastore browser without automatic replication. As it stands right now, I believe this is the first supported and functional configuration in the list, but the near 0 RTO isn’t met due to storage failover orchestration and appvolumes reconfiguration.

The aforementioned KB titled Moving writable volumes to new location in AppVolumes and some extra lab time indicates to me that in order to actually fail over properly, the Writable volume (since it is supposed to exist in one place only, and the datastore location is tied to the AppVolumes configuration in the database) would need to be deleted, imported and reassigned for every user of writable volumes. Not exactly a quick and easy recovery runbook.

Option 3b: Use Home Sites and SCP/PowerCLI scripted replication for writable volumes, manual replication for appstacks

This is basically option 3a, but instead of leveraging block-level replication, we could try an SCP/PowerCLI script that copies only the writable volumes and metadata files from one vCenter to the other in one direction. Pretty sure this would just be disastrous. Not recommended.

Option 4: Use a stretched cluster with active/active storage across the two datacenters for all app stacks and writable volumes


This pretty much defeats the whole point, because if we’re going to leverage a metrocluster we can simply protect persistent desktops with the same active/active technology and the conversation is over. It does add complexity because a separate entire pod must be leveraged so that the vCenter/Connection servers can failover with the desktops, which is a large amount of added complexity. A simplified config would be just to leverage active/active storage replication for the AppVolumes storage, but that’s super expensive for such a small return. At that point might as well put the whole thing on stretched clusters.

EDIT: But then again, near zero RTO dictates synchronous replication for the persistence. If the users’ UIA applications cannot be persisted via writable volumes, they must be persisted via Persistent desktops, which will end up requiring the same storage solution. I’ve pretty much ended up back here, which is right where I started from a design standpoint.

So! If any of you guys have any input I’d be interested to hear about it. The real kicker here is trying to avoid the complexity block level active/active storage configuration. If we remove that constraint or the writable volumes from the equation, this whole thing becomes a lot easier.

EDIT: The active/active complexity for near 0 RTO can’t be avoided just yet in my opinion. At least with option 4 we would only be replicating VMDK files instead of full virtual machines. To me that’s at least a little simpler, because we don’t have to fail over connection brokers and a separate vCenter server just for managing those persistent desktops.

Posted in Uncategorized