LAN in a CAN 1.0 – VMware ESXi, Multi-WAN pfSense with QoS, Steam Caching, Game Servers

The goal of this blog post is to highlight one of my “off the clock” creations that has faithfully serviced four LAN parties and counting. I wanted to share the recipe I used; I call it LAN-in-a-CAN. The aim of the project was simple: a portable platform that can handle LAN parties of up to 50 guests in an easy-to-use, quick-to-set-up configuration, complete with download caching for CDN networks, pre-validated QoS rules to keep gaming traffic smooth, and a virtual machine capable of hosting any game servers we need. Everyone in Ohio and the surrounding area, I invite you to come check out our LAN; you can find details at www.forgelan.com.

BIG PICTURE: Roll in, plug in internet, LAN Party GO! Less headache, more time having fun!


First of all, massive props to the following:

  • All the guys at Multiplay.co.uk and their groundbreaking work on the LAN caching config
  • All the guys investing time in pfSense development
  • @Khrainos from DGSLAN for the pfSense help
  • Sideout from NexusLAN for the Posts and Help

Pictures: IMG_1008, IMG_1006, IMG_1002, IMG_1004, IMG_1005

Physical Components:

Case: SKB 4U Rolling Rack ~$100

Network Switch: (2x) Dell PowerConnect 5224 24-port gigabit switch ~$70 each

Server Operating System: VMware ESXi 5.5 U2 Standalone – Free

Server Chassis: iStarUSA D-213-MATX Black Metal/ Aluminum 2U Rackmount microATX Server Chassis – OEM ~$70

Server Motherboard: ASUS P8Z77-M LGA 1155 Intel Z77 HDMI SATA 6Gb/s USB 3.0 Micro ATX Intel Motherboard ~$90 (DISCONTINUED)

Server Memory: G.SKILL Ripjaws Series 32GB (4x 8GB DIMMS) ~$240

Server CPU: Intel Core i3-3250 Ivy Bridge Dual-Core 3.5GHz LGA 1155 55W Desktop Processor ~$120

Server PCI-Express NICS: (3x) Rosewill RNG-407 – PCI-Express Dual Port Gigabit Ethernet Network Adapter – 2 x RJ45 ~$35 each

Server Magnetic Hard Drive: Seagate Barracuda STBD2000101 2TB 7200 RPM 64MB Cache SATA 6.0Gb/s 3.5″ Internal Hard Drive -Retail kit ~$90

Server SSD Hard Drive: (2x) SAMSUNG 850 Pro Series MZ-7KE256BW 2.5″ 256GB SATA III 3-D Vertical Internal Solid State Drive (SSD) ~$340

Wireless Access Point: $35

Total Cost: ~$1350 with Shipping

My comments on the configuration at present:

When you think about it, it’s not much more than a PC build. As cool as it looks, this is a garage sale lame-oh build. I was able to save some cash by repurposing older hardware I had lying around, such as the CPU, memory, motherboard and hard drives. When examining the scalability of the solution, the first thing that would have to change is the CPU clock speed and core count, followed immediately by a decent RAID controller. If it were required to service LANs larger than 50-70 in attendance, we’d probably want to change the layout so that the caching server had its own physical box.

Virtual Components:

pfSense Virtual Machine

Purpose: Serves DNS, DHCP, QoS, ISP connectivity and routing.

Operating System: FreeBSD 8.3

Virtual vCPU: 2

Virtual Memory: 2GB

Virtual NIC: 3 vNICs, 2 for WAN connectivity, 1 for LAN connectivity

Virtual Hard Drive: 60GB located on 2TB slow disk

Windows Game Server (Could Also be Ubuntu if Linux Savvy)

Purpose: Serves up all the local game servers, LAN web content and TeamSpeak, and acts as a TeamViewer point of management into the system remotely

Operating System: Windows Server

Virtual vCPU: 2

Virtual Memory: 4GB

Virtual NIC: 1 for LAN connectivity

Virtual Hard Drive: 400GB located on 2TB slow disk

Nginx Caching Server

Purpose: Caches data for Steam, Blizzard, RIOT and Origin CDN networks. I’ve only gotten Steam to actually cache properly. The others proxy successfully, but do not cache.

Operating System: Ubuntu Server 14.04, nginx 1.7.2

Virtual vCPU: 4

Virtual Memory: 12GB

Virtual NIC: 8 for LAN connectivity

Virtual Hard Drive: 1x 80GB on the slow disk, plus 2x 250GB disks, one on each Samsung 850; a ZFS stripe across both SSD-backed disks holds the cache
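
For reference, here is a minimal sketch of how that stripe could be built inside the Ubuntu guest, assuming the two SSD-backed virtual disks show up as /dev/sdb and /dev/sdc and the cache lives under /data/cache (device names, pool name and mount point are my placeholders, not from the original build):

    # Install ZFS (on 14.04 this came from the zfs-native PPA; package name varies by release)
    sudo apt-get install -y ubuntu-zfs

    # Create a striped (RAID0) pool across both SSD-backed virtual disks
    sudo zpool create -f cache0 /dev/sdb /dev/sdc

    # Give the nginx cache its own dataset, mounted where the cache config points
    sudo zfs create -o mountpoint=/data/cache cache0/nginx

    # Sanity check the layout and capacity
    sudo zpool status cache0
    sudo zfs list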

Download Box

Purpose: Windows box to pre-cache updates before the LAN starts. Allows multiple people to log in via TeamViewer, then into Steam, and seed all relevant games into the cache.

Operating System: Windows Desktop OS

Virtual vCPU: 1

Virtual Memory: 1GB

Virtual NIC: 1 for LAN connectivity

Virtual Hard Drive: 400GB located on 2TB slow disk

Big Picture Network and Caching Advice:

  • pfSense and Network
    • Use Multi-WAN configuration (yes, that means two ISPs) so that all proxied download traffic via the caching box goes out one WAN unthrottled, and all other traffic goes out a different WAN throttled. 2 WANs for the WIN. If you only have one WAN ISP connection, then just limit the available bandwidth to the steam caching server.
    • Use per-IP limits to ensure that all endpoints are limited to 2Mbps or another reasonable number per your total available bandwidth. QoSing the individual gaming traffic into queues proved to be too nuanced for me; it was just plain easier and more consistent to grant individual bandwidth maximums. This doesn’t count the proxied and cached downloads, which should be handled separately per the above bullet point.
    • Burst still isn’t working properly with the limiter config, apparently not an easy fix. Bug report here: https://redmine.pfsense.org/issues/3933
    • DNS spoofing is required here, which can be done out of the box using the DNS forwarder on pfSense (there’s a quick resolution check after this list)
    • We made the config work once on a 5/1 DSL link, but it was horrible. Once we at least had a 35/5 cable connection everything worked great.
    • The PowerConnect 5224 just doesn’t have enough available ports to fully service all the connectivity required. While it does allow you to create up to 6 LAGs with up to 4 links each, it basically means that there aren’t enough ports to have a main table, plus 2 large tables with enough LAGs in-between them. I recently purchased a used Dell 2748 to replace the 5224 at the core in the rack for around $100, and am looking forward to updating it.
  • Caching Box
    • Future state for big LANs: this would be a separate box like the guys at Multiplay.co.uk have laid out, 8x 1TB SSD in a ZFS RAID, 192GB memory with 10Gb networking, and dual 12-core procs. i3 procs with Samsung Pros seem to be working OK for us, but that’s because of the small scale. Would definitely need a couple of hoss boxes at scale.
    • When using multiple disks and ESXi, don’t simply create a VMFS datastore with extents, as the data will mostly be written to one of them and IO will not be staggered evenly. For this reason I opted to present two virtual hard drives to the virtual machine and let ZFS stripe them at the OS level (see the zpool sketch in the Nginx Caching Server section above).
    • Created a NIC and IP for each service that I was caching, for simplicity’s sake.
    • There are some nginx improvements coming that will natively allow caching for Blizzard and Origin, which don’t work today because their random range requests can’t be partially cached. See Steven’s forum post here for updates: http://forum.nginx.org/read.php?29,255315,255324
    • CLI tips:
    • Utilize nload to display NIC in/out traffic
      • sudo apt-get install nload
      • sudo nload -U G -u M -i 102400 -o 102400 (traffic shown in MByte/s, totals in GByte, graphs scaled to roughly 100 Mbit/s)
    • Log directories to tail for hit/miss logs
      • cd /data/www/logs
      • tail -f /location_of_log/name_of_log.log
    • Storage Disk Sizes & Capacity with Human Readable Values
      • df -h
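
As a quick sanity check of the DNS spoofing bullet above, confirm from any LAN client that the Steam content hostnames resolve to the caching box instead of the real CDN. The hostname and IPs below are placeholders; the actual list of spoofed hostnames comes from the Multiplay caching config:

    # Should return the LAN IP of the caching server (e.g. 10.0.0.10), not a public CDN address
    nslookup content1.steampowered.com

    # Query the pfSense DNS forwarder directly to confirm the override is in place
    dig +short content1.steampowered.com @10.0.0.1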

Achievable Performance:

Single Portal 2 Download to a Mac with SSD drive: 68MBps! Whaaaaat??? Ahamazing.

(Screenshot: Portal 2 download speed)

Was able to push 94MBps of storage throughput at a small mini-LAN with 8 people downloading Portal 2 at the same time. Those 8 people downloaded Portal 2 in about 10 minutes total.


(Screenshot: ESXi storage performance during the mini-LAN)

It’s been a great project, and I’m excited for the future, especially with the upcoming nginx modules that will hopefully provide partial cache capability.

Posted in Uncategorized

Installing ESXi 6.0 with NVIDIA Card Gives Fatal Error 10: Out of Resources

Thanks much to the original author of this post in the Cisco community; I was able to find the answer quickly. Figured I’d put it here in case anyone else hits it as well, since the error message is a little off.

https://communities.cisco.com/community/technology/datacenter/ucs_management/blog/2014/02/17/error-loading-toolst00-fatal-error-10-out-of-resources-cisco-ucs-c240-m3-server-with-esxi-55

Decompressed MD5: 00000000000000000000000000000000

Fatal Error: 10 (Out of Resources)

(Screenshots: the fatal error during the ESXi install)

 

Change the MMCFG Base BIOS parameter to 2 and you can install AOK.

Posted in Uncategorized

Horizon Workspace 2.1 – Logon Loop after Joining AD Domain

Turns out that if you join Horizon Workspace to an AD domain that uses a different URL namespace, such as “pyro.local”, than the one you actually publish, such as “workspace.pyro.com”, you have to modify the local hosts file of the appliance; otherwise you’ll end up in an endless loop of frustration like I was for the past several hours. The worst part is that the config saves just fine, but authentication no longer works, leaving you puzzled and sad. Just goes to show you: read the release notes. Hopefully this helps someone faster than Google did for me.

Screenshots below. The release notes, which apparently I didn’t take time to read, state:

(Screenshot: the relevant release note)

 

When joining the AD domain, you’ll get this. Which is fine, except people on the outside can’t resolve that address.

(Screenshot: the result of the AD domain join)

Now it’s off to vi to add a hosts entry. I had to log in as sshuser, then sudo to root.

(Screenshots: editing the hosts file on the appliance)

For those of you not fluent with the vi text editor (myself included), just hit i to insert, type what you need to type, then hit Escape to stop inserting text.

After you do that, hit ZZ to save and write the changes to the file. Lo and behold! You can resolve your external address and you don’t have a sad loop. It’s a Christmas miracle. What a rawr.
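
For reference, the hosts entry itself is a one-liner once you’re root on the appliance. A sketch below, using the example FQDN from this post and a made-up IP; substitute the appliance’s actual IP and your published Workspace FQDN:

    # Append the published FQDN to the appliance's hosts file (IP shown is a placeholder)
    echo "192.168.1.50   workspace.pyro.com" >> /etc/hosts

    # Confirm the appliance now resolves the external name locally
    ping -c 1 workspace.pyro.com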

 

 

 

Posted in Uncategorized

Adding Microseconds to vRealize Operations Graphs

In my old role as a vSphere administrator for a single company, we upgraded our storage from a legacy spinning-disk array with a small amount of cache. That environment often experienced disk latency greater than 3ms on average and oftentimes much higher than that, getting up to 20ms or higher for 30 seconds at a time, over a thousand times a day. We upgraded to a hybrid array and saw less than a millisecond of latency for read and write operations…consistently.

vCenter Performance

We tracked this with vCenter as well as vCenter Operations Manager. In vCenter, it’s tracked per virtual machine instance. We could monitor this by using the vCenter performance tab on a virtual machine, selecting Virtual Disk, and watching the read and write latency numbers in both milliseconds and microseconds, as seen below.

Monitoring in vCenter was great, but it’s only available for live data, or the last hour. That doesn’t help in seeing history. We fixed this by looking into vCenter Ops and in our installation, the numbers were there. Great…history. We can now use this latency number as justification for our new storage selection.

Fast forward to my current role, as a consultant. I recently installed vRealize Operations Manager for a customer. One of the reasons was to monitor disk performance as they evaluated new storage platforms for their virtual desktop environment. As I looked into vCenter, the counters were there, but when I looked into vRealize Ops the counter wasn’t available. What gives?

After contacting my previous coworker, we compared configurations and neither one of us could figure it out. I already had a case open with VMware regarding the vRealize Ops install for an unrelated issue. On a recent call with VMware support, we compared these two vCenter/vRealize Operations Manager environments. It really stumped the support engineer as well. After about five minutes, we figured it out. When vCenter Operations Manager was installed, the option to gather all metrics was selected. vRealize Operations Manager doesn’t give you this option. Instead you need to change the policy.

Here’s How:
Log into vRealize Operations Manager, select a VM, and select the Troubleshooting tab. Take note of the policy that is in effect for the VM…if you haven’t tweaked vROps, then the policy will be the same for all objects.

vRealize Policy

Click on this policy; this will take you to the administration portion of vROps, focused on the “Policies” section.

vRealize Policy List

Click on the “Policy Library” tab. Drop down the “Base Setting” group, and select the proper policy that was at the top of the virtual machine Troubleshooting tab. Then click on the edit pencil.

vRealize Operations Edit Policy

Once in the “Edit Monitoring Policy” window (shown below), limit the Object Type to “Virtual Machine” and filter on “Latency”. This will reduce the list to a single page.

vRealize Override Attributes

Change the “state” for the Virtual Disk|Read Latency (microseconds) and Virtual Disk|Write Latency (microseconds) to “Local” with a green checkmark.

vRealize Operations Attribute Changes

After about 5 minutes, verify that the data is now displayed in vRealize Ops.

vRealize Operations Graph

 

Why is this counter important?
As SSD arrays like Pure Storage and XtremIO or hybrid arrays like Tintri become more prevalent, millisecond latency numbers sit near 0 and result in a single flat line with no real data, but microsecond granularity really allows administrators to see changes in latency that would otherwise be flattened out.

I’d like to thank El Gwhoppo for allowing me to guest author. I hope this tidbit is useful information!

Posted in vCenter, VMware

How to Extend XtremIO Volumes

Want to extend an XtremIO volume? Simply right-click on it and select Modify Volume. Pretty darn easy. I was somewhat annoyed to find that this information isn’t exactly readily available from “the Google”, so I decided to write this up after verifying with my EMC home boys that it is a non-destructive process. This was performed on an XtremIO running the 3.0.0 build 44 code. Not sure if it will work for shrinking a volume, but perhaps someone can comment with that information.

(Screenshots: modifying the volume size in the XtremIO UI)

 

 

Posted in Uncategorized

UCS, View, Virtual SAN Reference Architecture: Cool! But did you think about…

I really appreciate the article titled “Myth versus Math: Setting the Record Straight on Virtual SAN” written by Jim Armstrong. He highlights some of the same points I’m going to make in this post: Virtual SAN is a very scalable solution and can be configured to meet many different types of workloads, and reference architectures are a starting point, not complete solutions. I am a big fan of hyper-converged architectures such as Virtual SAN. I’m also a big fan of Cisco-based server solutions. However, I’ve also had to correct underperforming designs that were based entirely off of reference architectures, and as such I want to ensure that all the assumptions are accounted for in a specific design. One of the reasons I’m writing this is because I recently delivered a Horizon View Plan and Design which included a Virtual SAN on Cisco hardware, so I wrestled with these assumptions not too long ago. The secondary reason is because this architecture utilizes “Virtual SAN Ready Nodes”, specifically as designed for virtual desktop workloads. That vernacular brings trust and an immediate “Oh neat, no thinking required for sizing” to most, but I would argue you still have to put on your thinking cap and make sure you are meeting the big picture solution requirements, especially for a greenfield deployment.

The reference architecture that I recently read is linked below.

http://blogs.vmware.com/tap/2015/01/horizon-6-virtual-san-cisco-ucs-reference-architecture.html

http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/desktop-virtualization-solutions-vmware-horizon-view/whitepaper_C11-733480.pdf

Remember, reference architectures establish what is possible with the technology, not necessarily what will work in your house. Let’s go through the assumptions in the above setup now.

Assumption #1: Virtual Desktop Sizing Minimums are OK

Kind of a “duh”. Nobody is going to use big desktop VMs for a reference architecture because the consolidation ratios are usually worse. The article titled “Server and Storage Sizing Guide for Windows 7 Desktops in a Virtual Desktop Infrastructure” highlights minimum and recommended configurations for Windows 7, which includes the table below.

(Table: Windows 7 minimum and recommended virtual desktop configurations)

Some (not all) of the linked clones configured in this reference utilize 1.5GB of memory and 1 vCPU, which is below the recommended value. I’ve moved away from the concept of starting with 1 vCPU for virtual desktops regardless of the underlying clock speed, simply because a multi-threaded desktop performs better given that the underlying resources can satisfy the requirements. I have documented this in a few previous blog posts, namely “Speeding Up VMware View Logon Times with Windows 7 and Imprivata OneSign” and “VMware Horizon View: A VCDX-Desktop’s Design Philosophy Update“. In the logon times post, I found that simply bumping from 1 vCPU to 2 shaved 10 seconds off the logon time in a vanilla configuration.

Recommendation: Start with 2 vCPU and at least 2GB as a matter of practice with Windows 7, regardless of virtual desktop type. You can consider bumping some 32-bit use cases down to 1 vCPU if you can reduce the in-guest CPU utilization by doing things like offloading PCoIP with an APEX card, or utilizing host-based vShield-enabled AV protection instead of in-guest. Still, you must adhere to the project and organizational software requirements. Size in accordance with your desktop assessment and also POC your applications in VMs sized per your specs. Do not overcommit memory.

Assumption #2: Average Disk Latency of 15ms is Acceptable

This reference architecture assumes from a big picture that 15ms of disk latency is acceptable, per the conclusion. My experience has shown me that about 7ms of average latency is the most I’m willing to risk on behalf of a customer, with the obvious statement “the lower the better”. Let’s examine the 800 linked clone desktop latency: in that example, the average host latency was 14.966ms. More often than not, requirement #1 on VDI projects I’m engaged on is acceptable desktop performance as perceived by the end user. In this case, perhaps more disk groups, more magnetic disks, or smaller and faster magnetic disks should have been utilized to service the load with reduced latency.

Recommendation: Just because it’s a hyper converged “Virtual SAN Ready Node” doesn’t mean you’re off the hook for the operational design validation. Perform your View Planner or LoginVSI runs and record recompose times on the config before moving to production. When sizing, try to build so that average latency is at or below 7ms (my recommendation), or whatever latency you’re comfortable with. It’s well known that Virtual SAN cache should be at least 10% of the disk group capacity, but hey, nobody will yell at you for doing 20%. I also recommend the use of smaller 15K drives when possible in a Virtual SAN for VDI, since they can handle more IO per spindle, which fits better with linked clones. Need more IO? Time for another disk group per host and/or faster disk. Big picture? You don’t want to dance with the line of “enough versus not enough storage performance” to meet the requirements in a VDI. Size appropriately in accordance with your desktop assessment, beat it up in pre-prod and record performance levels before users are on it. Or else they’ll beat you up.

Assumption #3: You will be handling N+1 with another server not pictured

In the configuration used, N+1 is not accounted for. Per the norm in most reference architectures, it’s taken straight to the limit without thought for failure or maintenance: 800 desktops, 8 hosts. The supported maximum is 100 desktops per host on Virtual SAN, due to the maximum number of objects supported per host (3000) and the number of objects linked clones generate. So while you can run 800 desktops on 8 hosts, you are officially at capacity. Several failure scenarios were demonstrated in the reference (simulated failure of an SSD, a magnetic disk, and the network), but the failure of an entire host was not one of them.

Recommendation: Size for at least N+1. Ensure you’re taking 10% of the pCores away when running your pCores/vCPU consolidation ratios for desktops to account for the 10% Virtual SAN CPU overhead. Ensure during operational design validation that the failure of a host including a data resync does not increase disk workload in such a fashion that it negatively impacts the experience. In short, power off a host in the middle of a 4 hour View Planner run and watch the vsan.observer for disk latency one hour later during the resync.
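
If you haven’t driven vsan.observer before, it’s started from RVC on the vCenter server; a rough sketch is below, with placeholder credentials and cluster name. Once it’s running, browse to port 8010 on the vCenter box to watch per-host and per-disk-group latency live during the resync:

    # Connect to vCenter with the Ruby vSphere Console
    rvc administrator@vsphere.local@vcenter.lab.local

    # From the RVC prompt: start the observer web UI against the cluster for a few hours
    vsan.observer ~/computers/VDI-Cluster --run-webserver --force --max-runtime 5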

Assumption #4: Default policy of FTT=0 for Linked Clones is Acceptable

I disagree with the mindset that FTT=0 is acceptable for linked clones. In my opinion it provides a low quality configuration, as the failure of a single disk group can result in lost virtual desktops. The argument in favor of FTT=0 for stateless desktops is “So what? If they’re stateless, we can afford to lose the linked clone disks, all the user data is stored on the network!” Well, that’s an assumption and a fairly large shift in thinking, especially when trying to pry someone away from the incumbent monolithic array mentality. The difference here is that the availability of the storage on which the desktops reside is directly correlated with the availability of the hosts. If one of the hosts restarts unexpectedly, with FTT=0 the data isn’t really lost, just unavailable while the host is rebooting. However, if a disk group suffers data loss, it makes a giant mess of a large number of desktops that you end up having to clean up manually, since the desktops are unable to be removed from the datastore in an automated fashion by View Composer. There’s a reason the KB which describes how to clean up orphaned linked clone desktops comes complete with two YouTube videos. This is something I’ve unfortunately done many times, and wouldn’t wish upon anyone. Another point of design concern is that if we’re going to switch the linked clone storage policy from FTT=0 to FTT=1, we’re now inducing more IO operations which must be accounted for across the cluster during sizing. A further concern is that without FTT=1, the loss of a single host will result in some desktops that get restarted by HA, but also some that may only have their disk yanked. I would rather know for certain that the unexpected reboot of a single host will result only in a state change of the desktops located on that host, not a state change of some desktops plus the storage availability of others called into question. The big question I have in regards to this default policy is: why would anyone move to a model that introduces a SPOF on purpose? Most requirements dictate removal of SPOFs wherever possible.

Recommendation: For the highest availability, plan for and utilize FTT=1 for both linked clone and full desktops when using Virtual SAN.

The default linked clone storage policy is pictured below as installed with Horizon View 6.

(Screenshot: the default linked clone storage policy with FTT=0)

Assumption #5: You’re going to use CBRC and have accounted for that in your host sizing

Recommendation: Ensure that you take another 2GB of memory off each host when sizing to allow for the CBRC (content based read cache) feature to be enabled.

That’s all I got. No disrespect is meant to the team of guys that put that doc together, but some people read those things as the gold standard when in reality:

 

(Image: guidelines)

Posted in Virtual SAN, VMware

AppVolumes: Asking the Tough Questions

I had some of these tough questions about AppVolumes when it first came out, and I set out to answer them with some of my own testing in my lab. These questions go beyond the marketing and get down to core questions about the technology, ongoing management and integrations. Unfortunately I can’t answer all of them, but these are some of my observations after working in the lab with it and engaging some peers. Much thanks to @Chrisdhalstead for the peer review on this one. Feedback welcome!
Q. Will AppVolumes work running on a VMware Virtual SAN?
A. Yes! My lab is running entirely on the Virtual SAN datastore and AppVolumes has been working great for me on it.
Q. Can I replicate and protect the AppStacks and Writable volumes with vSphere Replication or vSphere Data Protection, or is array based backup/replication the only option?
A. At the time of writing, I’m pretty sure the only way to replicate the AppStacks is with array based replication. Of course, you could create a VM that serves storage to the hosts and replicate that, but that’s not an ideal solution for production. Hopefully more information will come out on this in the near future. This means that currently, there’s no automated supported method that I’m aware of to replicate the volumes when Virtual SAN is used for the AppStacks and writable volumes. Much thanks to the gang on twitter that jumped in on this conversation: @eck79, @jshiplett, @kraytos, @joshcoen, @thombrown
Q. So let’s talk about the DR with AppVolumes. Suppose I replicate the LUNs with array based replication that has my AppStacks and Writable Volumes on it. What then?
A. First of all, this is a big part of the conversation, and from what I hear there are some white papers brewing for this exact topic. I will update this section when they come out. My first thought is for environments with the highest requirements for uptime, which are active/active multi-pod deployments. My gut reaction says that while you can replicate, scan and import AppStacks for the sake of keeping app updating procedures relatively simple, I’m not sure how well that’s going to work with writable volumes when there is a user persona with actively used writable volumes in one datacenter. The same problem exists for DFS: you can only have a user connect from one side at a time (connecting from different sides simultaneously with profiles is not supported), so we generally need to give users a home base. However, DFS is an automated failover in that replication is constantly synching the user data to the other site and is ready to go. So for that use case, first determine whether profile replication is a requirement or whether, during a failure, it’s OK for users to be without profile data. If keeping the data is a requirement, I’d say you should probably utilize a UEM with replicating shares via DFS or something similar. For active/warm standby, the advice is the same. For active/standby models, you’ll have to decide if you’re going to replicate your entire VDI infrastructure and your AppVolumes, or if you’re going to set up infrastructure on both sides.
Q. But after replication how will AppVolumes know what writable volumes belong to what user? Or what apps are in the AppStacks?
A. That information is located in the metadata. It should get picked up when re-scanning the datastore from the Administrative console. I’m sure there’s more to it than that, but I don’t have the means to test a physical LUN replication at the moment. Below is an example of what is inside the metadata files. This is what’s picked up when the datastore is scanned via the admin manager.
(Screenshots: example contents of the AppVolumes metadata files)
Q. What about backups? How can I make sure my AppStacks are backed up?
A. Backing up the SQL database is pretty easy and definitely recommended. To back up the AppStacks themselves, you should consider and evaluate a backup product that allows for granular backups down to the virtual disk file level on the datastore. This doesn’t include VDP. Otherwise you’re going to have to download the entire directory the old fashioned way, or scp the directory to a backup location on a scheduled basis (a rough example follows).
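As a rough example of that old fashioned way, a scheduled copy could look something like the below, run from a backup box against an ESXi host with SSH enabled; the host name, datastore and paths are placeholders, not from my lab:
    # Pull the AppStacks directory (.vmdk files plus metadata) off the datastore via an ESXi host
    scp -r root@esxi01.lab.local:/vmfs/volumes/datastore1/cloudvolumes/apps "/backup/appvolumes/$(date +%F)"
    # Put the same line in cron (crontab -e) on the backup box to run it nightly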
Q. In regards to application conflicts between AppStacks and Writable Volumes, who wins when different versions of the same app are installed? How can I control it?
A. My understanding is that the last volume that was mounted wins. In the event a user has a writable volume, that is always mounted last. By default, they are mounted in order of assignment according to the activity log in my lab. If there is a conflict between AppStacks, you can change the order in which they are presented. However, you can only change the order for AppStacks that are assigned to the same group or user. For example, if you have 3 AppStacks and each of them is assigned to a different AD group, there’s no way to control the order of assignment. If 3 AppStacks are all assigned to the same AD group, you can override the order in which they are presented under the properties of the group.
Q. What happens if I Update an AppStack with an App that a User already installed in their writable volume? For example, if a user installed Firefox 12 in their writable volume, then the base AppStack was updated to include Firefox 23, what happens when the AppStack is updated?
A. My experience in the lab shows that when a writable volume is in play for a user, anything installed there that conflicts will trump what is installed in an AppStack. Also all file associations regardless of applications installed in the underlying AppStacks are trumped by what has been configured in the user profile and persisted in the writable volume. Also, my test showed that if you remove the application from the writable volume and refresh the desktop, the file associations will not change to what is in the base AppStack.
(Screenshot: Firefox before the AppStack update)
After the update:
(Screenshot: Firefox after the AppStack update)
Files in the C:\Program Files\ directory were still version 12.
(Screenshot: Program Files contents)
Q. What determines what is captured via UIA on a user’s writable volume and what is left on the machine?
A. Again, another thing I’m not sure of. I know there must be some trigger or filter, but it is not configurable. I was able to test and verify that all of these items are kept:
     • All files in the user’s profile
     • Wallpapers
     • Printers
     • AppData
     • HKCU registry settings
     • Windows Explorer view options such as file extensions and hidden folders
Q. Can updates be performed to the App Stacks without disruption of the users that have the App Stack in use?
A. Yes. While it is true that you must unassign the old version of the AppVolume first, you can choose to wait until next logon/logoff. Then you can assign the new one and also choose to wait until the next logon/logoff. There is a brief moment after you unassign the AppStack that a user might log in and not get their apps, so off hours is probably a good time to do this in prod unless you’re really fast on the mouse.
(Screenshot)
Q. If one AppStack is associated to one person, what about people that need access to multiple pools or RDSH apps simultaneously?
A. A user can have multiple AppStacks that are used on different machines. You can break it down so they are only attached on computers that start with a certain hostname prefix and OS type.
(Screenshot)
Q. So if an RDSH computer has an AppStack assigned to the computer, can you use writable volumes for the user profile data?
A. No. It is not supported to have any user-based AppStacks or writable volumes assigned when a computer-based AppStack is assigned to the machine.
Q. So can I still use AppVolumes to present apps with RDSH?
A. Sure you can; it just has to be machine-based assignments only. You just can’t use user-based assignments or writable volumes at the same time. While we’re on the topic, there’s no reason you couldn’t present AppStacks to XenApp farms too; it’s what CloudVolumes originally did. My recommendation is to use a third-party UEM product for profiles when utilizing RDSH app delivery; trust me that it’s something you’re going to want granular control over.
Q. What about role based access controls for the AppVolumes management interface? There’s just an administrator group?
A. Correct, there’s no RBAC with the GA release.
Q. Does this integrate with other profile tool solutions?
A. Yes. I’ve used Liquidware Labs Profile Unity to enable automatic privilege elevation for installers located on specified shares or local folders. This way users can install their own applications (provided they’re located on a specified share), which will be placed in the writable volume, or they can one-off update apps that have frequent updates without IT intervention, right in the writable volume.
Posted in Uncategorized