Sharks and Security Tech

So I saw this news story over the weekend, and honestly, after watching the video, the first thing I thought of was firewalls. Kind of sad that this was the first analogy I drew from it, but that's OK. It comes with the territory.

shark.png

So often in IT we bet on a technology to keep us secure, whether that's next-generation firewalls, process whitelisting, anti-malware, or network segmentation. All of these are good things, and no security consultant should ever argue against the concept of defense in depth. The problem is that once we have these technologies in place, our thinking can shift to believing we are "good", or that we're "safe".

Here are some examples of those thoughts:

  • “We have new next generation firewalls, nothing can get us now”
  • “We segmented off all our DB servers, we should be good now”
  • “We have no admin on any user boxes and antimalware everywhere”

I'm wondering how many cages that diver is going to trust after this terrifying diving experience. I bet netsec employees who have suffered a serious breach feel the same way about some of these technologies.

The point I'm trying to make with this post is that I never would have thought something like this could happen; those cages were ENGINEERED to withstand any kind of shark coming at them.

Understanding how you perceive the technologies you are a fan of is extremely important. Always understand your personal status quo and your thoughts and feelings around how you've been successful with a particular product. Don't let that success convince you that a particular solution is impossible to break or breach, because that trust can very easily turn into complacency. Just because it works and reduces threats doesn't make you bulletproof.

On a related note, I'm updating my bucket list and removing an item.

The Digital Workspace – I Fight For the Users

The Digital Frontier Workspace
tron_1982_movie_poster_01

In the 1982 movie TRON, there is a phrase that Tron says that sort of defines his purpose: "I Fight for the Users".

Having become a VDI/Presentation specialist over the past several years, I'm pretty sure I've felt this way several times. At the end of the day, our job as IT employees is to provide value to the business. Below is a quick list of things I've done in the past where it felt like I had to "fight for the users":

  • Proved the need for more compute resource for overly dense environments
  • Rationalized the upgrade of underperforming storage
  • Justified the value of graphics capabilities inside a remote session
  • Worked with IT departments to enable faster and less disruptive delivery of updates and net new applications
  • Troubleshot performance, network and software issues that were impeding progress of application delivery

The transformation of "remote access" into the "Digital Workspace" has been a very long time coming, and for the most part the concept isn't new. The new catch is that with SaaS becoming a normal delivery method for applications, a great deal of complexity gets added when attempting to deliver all the applications someone might need in a "one-stop shop". This was the message from the stage at VMworld this year. "Consumer Simple, Enterprise Secure" was the verbiage Sanjay Poonen used. He also used the term "Sesame Street Simple" to describe the end-user experience for signing on and accessing applications safely and securely, with a fast and simple BYOD enrollment process.

Here is where it gets fuzzy: the "Sesame Street Simple" concept, in my opinion, does not necessarily apply to the engineers and architects behind the solution. It's the same as developing software: making it simple for the end user requires a great deal of complexity on the back end to deliver as simple a workflow as possible. We can simplify portions of the task, such as storage and compute expansion, thanks to the prevalence of HCI, but overall, providing a one-stop shop for on-prem and SaaS-based apps with a single sign-on action is a daunting task. It's for this reason that I bring up the concept of fighting for the user; this is not an easy task. It's a fight we must constantly keep in mind: we are in the business of enabling the business, and therefore the users, regardless of where the applications reside. I fight for the users, and sometimes that means fighting against IT departments that are reducing their value proposition in a fast-shifting SaaS landscape. If you cannot service the business with acceptable SLAs or timely responses, shadow IT will creep in and/or damage will be done to the organization.

For my part, I will continue to propose solutions that ATTEMPT to make life easier for both end users and engineers, but the very nature of making it simple on one side of the equation will almost always make it more complex on the other.

In regard to Workspace ONE, I'm actually very excited to watch the evolution of A2, which will be delivering AppStacks directly to physical Windows 10 endpoints, likely via AirWatch. As a former veteran Microsoft SCCM (ConfigMgr) engineer, I know that delivering applications and controls to Windows devices that aren't connected to the corporate network isn't a new concept; it's been possible for quite some time by establishing a PKI, albeit a complex undertaking. What is new is that, with the improvements the Windows 10 operating system has brought about, it's now possible to manage policy and applications on Windows 10 devices natively with Unified Endpoint Management software, which can also provide sandboxing technology. This sandboxing and isolation is a BYOD game changer, and when combined with application delivery and a seamless experience across different operating systems and devices, it can make a huge difference and cut down the BS logon and auth time in between actual work. It's my personal opinion that we'll see a real uptick in UEM combined with Digital Workspace transformation over the next 3 years.

Until next time my friends, I hope to see you all on the battlefield of fighting for the users.

Horizon View 6.2 – Cannot Disable Connection Server – Failed to update Connection Server

I recently saw an error message when trying to disable a Connection Server via the admin UI, which reads:

Failed to update the Connection Server.

Images taken from a twitter post by Ronald Westerhout @rwesterhout81

Untitled

So it turns out, per Ronald, that unless you have both the SSL gateway and the Blast gateway either checked OR unchecked, the Disable button won't work. I'm not sure which versions of View this problem exists in, because I'm pretty much only disabling connection servers when there's a problem or when doing an upgrade. Either it's a bug, or a KB needs to be written for this error dialog. Until then, I hope this post finds you well, fellow Horizon administrators!

Capture

 

How To Reclaim ESXi VMFS storage with Ubuntu VMs

First of all, zero the space in the guest. You can do that easily with the secure-delete package. In the Ubuntu guest, the srm command below will recursively and forcibly delete all files and folders under the target directory, verbosely, overwriting the old data with zeros in a single pass.

df -h                                  # free space before cleaning
sudo apt-get install secure-delete     # provides the srm tool
# -r recursive, -f fast mode, -v verbose, -z zero the final pass, -l -l reduce to a single pass
sudo srm -r -f -v -z -l -l /dir/to/clean/*
df -h                                  # confirm the space was freed in the guest

Next, simply Storage vMotion the VM to another datastore with the thin provisioning option set. If that's not an option for you, you'll need to power off the VM and punch out the zeroed blocks manually.

In ESXi:

<Power off the VM>

# Change to the VM's folder on the datastore
cd /vmfs/volumes/to/your/vm/folder
# Deallocate the zeroed blocks from the thin-provisioned disk (same as vmkfstools -K)
vmkfstools --punchzero vmdk-to-shrink.vmdk
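
If you want to sanity-check the reclaim, a quick before/after comparison on the datastore works. This is just a rough sketch and assumes the usual <vmname>-flat.vmdk naming for the data file of a thin-provisioned disk:

ls -lh vmdk-to-shrink-flat.vmdk   # provisioned size (stays the same)
du -h vmdk-to-shrink-flat.vmdk    # blocks actually allocated on VMFS (should drop after punchzero)

Keep in mind that --punchzero only helps thin-provisioned disks; if the disk is thick, a Storage vMotion (or clone) to thin is the way to get the space back.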

Load Balancing Horizon View with NSX

This post will explain in detail how I set up NSX load balancing in a test environment while still meeting my requirements. The requirements I had for a Horizon View load balancer were:

  1. As simple as possible
  2. Session performance is acceptable and scalable
  3. Ability to automatically fail nodes so a reconnect gets back in less than one minute
  4. Balance Horizon View SSL, PCoIP and HTML5 Blast session traffic appropriately
  5. The entire solution must be able to leverage a single public IP address for all communication

So there are two primary ways of constructing an NSX load balancer: inline or one-armed. These drawings were borrowed from a VMworld 2014 slide deck.

in-line one-arm

In this particular example, we opted for one-armed, which means the load balancer node will live on the same network segment as the security servers.

Let's take a look at how this is set up logically in the test lab. You may notice that I used two ports for PCoIP, 4172 and 4173. This is because I wanted to keep it as simple as possible, and with the requirement of a single IP address, keeping the PCoIP listen port for each security server separate was the easiest option. I was able to load balance and persist both 443 and 8443 with NSX, but stepped around the balancing for PCoIP and simply passed it through. In production you can just use another public IP address if you want to keep PCoIP on 4172, and then NAT it to the corresponding security server. If you need more detail on the lab setup, you can check out my previous post, which has more detail on the firewall rules used.

logical-loadbalancing

General notes and findings about the setup before we get into the screenshots:

  • The demonstrated configuration is for external connectivity only at present, since persistence matters most when tunneled.
  • SSL passthru is utilized rather than offloading.
  • The DFW is in play here as well, and all firewall rules are in place according to the required communication.
  • In order to configure the security server to listen on 4173, I had to follow the KB and create a reghack on the box, AND configure the tunnel properly in the View GUI.
  • In order to get SSL and BLAST working together from a single VIP, I ended up needing separate virtual servers, pools and service monitoring profiles that all used the same VIP and Application profile.
  • I ended up using ssl_sessionid as the persistence type for both SSL and BLAST, and ended up using IPHASH for the load balancing algorithm.
  • I wasn't able to configure a monitor to correctly report 8443 as UP; instead it sat in WARN(DOWN) on the 8443 monitor. I opted to simply use 443 as the monitor for the Blast VIP, so there could potentially be a missed failover if only the Blast service were down.

Let’s take a look at the configuration screen shots of the View specific load balancer edge that I used.

NSX-LB2015-09-24 23_17_41

We opted for the simple configuration of SSL passthrough, and SSL Session ID was used for persistence.

NSX-LB2015-09-24 23_18_01

I found you had to create a separate monitor for each pool for it to function properly; I'm not really sure why.

NSX-LB2015-09-24 23_31_03

You should be able to use /favicon.ico to determine health, as we do with other load balancers, but the plain HTTPS GET worked for me and the favicon check didn't.

NSX-LB2015-09-24 23_31_16 NSX-LB2015-09-24 23_31_25

** Important: notice that no ports were specified in the server pool, only the monitor port. The service port was specified only in the virtual servers below.

NSX-LB2015-09-24 23_43_47 NSX-LB2015-09-24 23_43_59

NSX-LB2015-09-24 23_45_04

NSX-LB2015-09-24 23_45_13

NSX-LB2015-09-24 23_46_20 NSX-LB2015-09-24 23_46_41

Let’s take a look at the screen shots of the View security server configuration.

ss1

ss2

If you have to make a security server listen on another port, don't forget to change it in the registry as well as in the secure gateway external URL field. Also don't forget to allow the new port through the Windows firewall.
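
For reference, the registry side of that change looks roughly like the following. Treat this as a hedged sketch rather than gospel: the key path and value names (ExternalTCPPort / ExternalUDPPort under the Teradici SecurityGateway key) are from memory of the KB, so verify them against the current KB article before applying anything.

rem Assumed key and value names - confirm against the VMware KB before running
reg add "HKLM\SOFTWARE\Teradici\SecurityGateway" /v ExternalTCPPort /t REG_SZ /d 4173
reg add "HKLM\SOFTWARE\Teradici\SecurityGateway" /v ExternalUDPPort /t REG_SZ /d 4173

After the registry change, restart the security server services (or reboot the box) and make sure the PCoIP External URL in View Administrator points at the same port.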

ss3

And voilà! We have a working configuration. Here's a quick demonstration of the failover time, which can be completely tuned. Of note, we only tested Blast failover, but SSL and PCoIP work just fine as well. The best part is that we were able to create a COMPLETELY separate, isolated, and firewalled security server without even leveraging a DMZ architecture.

If you're looking for another AWESOME post on how the NSX load balancer can be used, I learned more from this one by Roi Ben Haim than from the pubs. A great step-by-step setup that helped me big time.