Nobody cares what kind of undershirt you’re wearing. (Or what your hypervisor is)

[Image: assorted undershirt brands]

When was the last time you judged somebody because of the brand and type of undershirt they were wearing? Likely never. You may have judged someone for NOT wearing an undershirt (ugh), but that’s a different story. Nobody cares what kind or brand of undershirt you’re wearing. Except for one person. You. Why is that? What do you consider when you’re at the store and need fresh undershirts?

You probably care about several things:

  • YOU want to be comfortable. This is the primary reason you stick with an undershirt brand or type, typically regardless of cost.
  • YOU want to look professional. Wear an undershirt. My fellow dudes, wear one.
  • YOU have to pay money for these shirts, so they can’t be ridiculously overpriced compared to alternatives.
  • YOU do not want OTHERS to perceive you as bad-smelling.

One of these bullet points has something to do with others. And it’s the last one. Arguably, it’s the single most important reason to wear an undershirt.

When it comes to your choice of hypervisor:

  • YOU want to be comfortable with the technology choice
  • YOU have to pay for your platform (sometimes the software, at the very least the hardware and support)
  • YOU do not want OTHERS to perceive you as providing a bad-smelling platform and application experience

The point I’m trying to make here is that your personal preferences for Xen, KVM, vSphere, OpenStack, or Hyper-V are just that: your preferences. Your battle scars and stories are all about you and your perspective. You also care about others understanding the success you’ve had with these solutions in the past.

If you’re a Linux guy with a beard down to your belt, you might be more apt to support a KVM environment, simply because you can make it do what the others can for less capital software investment, and because you can operate it immediately with less training. Alternatively, if you’ve traditionally been a Windows administrator, perhaps you’re more apt to pay for vSphere because of familiarity and the ease of implementation and integration. And if you’re all in on the Microsoft stack, Hyper-V might make sense given severe cost restrictions and a lack of Linux administration experience.

What you prefer may simply not be the best fit for what you’re trying to accomplish, but not for the reason you might think. When looking at TCO, you have to look at everything involved in a solution, including the salaries of the people who can run it and the cost of hardware, training, support, and software.

All the personal preferences and feature capabilities don’t mean anything if, at the end of the day, you as an IT professional can’t provide:

  • Availability – High uptimes for your platform and application
  • Manageability – Your people can own the solution stack. Also, you can easily provide access to consumers.
  • Performance – You can size your platform to fit your workload with excellence
  • Recoverability – You can recover your solution after accidental deletion or destruction
  • Security – Your solution has an update lifecycle and can meet the business security requirements.
  • Cost – Your solution operates within the business’s financial constraints.

You can build an on-premises cloud today with standardized HP cloud-type servers, a standard Ubuntu build, and Docker. You can build it with Nutanix and KVM or vSphere. You can build it with vSphere and VxRAIL or VxRACK. Why do you ultimately pick one over the other? Mostly for the same reason you pick an undershirt from the aisle.

If it meets the requirements, you won’t smell bad, and it doesn’t break the bank, it sounds like you’ve got a winner. But just like I don’t care what kind of undershirt you’re wearing, I guarantee your end users don’t care what hypervisor OS you’re running. They care how well you run their applications. So if your platform choice is ultimately impacting the business or causing them to perceive you in a negative light due to a lack of availability, a long time to value on capital expenses, or an inability to recover lost information, that might be an indicator that you need to check the market. If not, keep your eye on the cost meter and keep on truckin’.

**Disclaimer: While the illustration used in this post does cater more specifically to men, it is not meant to be gender exclusive. There are tons of excellent IT professionals who are women, to whom this article applies equally.

Posted in Uncategorized

PIVOT!


I ask people these three questions when they are considering new opportunities for their career:

  1. Will you be doing what you love?
  2. Will it give you more time at home?
  3. Is it better for you financially?

The past few weeks have been pretty tumultuous at the elgwhoppo house. When your current employer wants to keep you, it’s usually a good thing, because typically you’ve shown value and are a hard worker. When they “go to the mattresses” to keep you, it’s usually an excellent thing, because they see you as a critical piece of the organization’s culture and success. As a result, I’ve been wrestling with those aforementioned three questions over the past few weeks.

The comments about innovation I made in a prior (since removed) blog post remain true; channel partner organizations must innovate and show value on the front end of the business by identifying disruptive and profitable technologies and aligning resources and strategy in order to be successful. As I moved through the conversations of these past few weeks, it became apparent that I had a massive choice to make. Either I could walk away from the challenges of front-end innovation in the channel, keep myself headlong on the technical, boots-on-the-ground path, and eventually pivot into management, or I could stay and pivot now to be part of the team on the bridge of the ship, helping set its course. My decision speaks for itself, as I am now a Principal Architect at Rolta AdvizeX.

In my new role, I will be architecting solutions beyond the marketing to help customers work through the real challenges that come with Hybrid Cloud and Digital Workspace solutions. I am excited to drive new innovation and continue to provide significant value at AdvizeX. From here on out, I will lean more on people and process to accomplish objectives, rather than mostly on technology. It’s a bit of a scary change, but hey, if you never move away from the things you’re comfortable with, you will never grow. Here’s to never stopping growing and learning.

Posted in Uncategorized

Sharks and Security Tech

So I saw this news story over the weekend, and honestly, after watching the video the first thing I thought of was firewalls. It’s kind of sad that this was the first analogy I drew from it, but that’s OK. It goes with the turf.


So often in IT we bet on a technology to keep us secure, whether that’s next-generation firewalls, process whitelisting, anti-malware, or network segmentation. All of these are good things; no security consultant should ever argue against the concept of defense in depth. The problem is that once we have these technologies in place, our thinking can shift to believing we’re “good” or “safe”.

Here are some examples of those thoughts:

  • “We have new next generation firewalls, nothing can get us now”
  • “We segmented off all our DB servers, we should be good now”
  • “We have no admin on any user boxes and antimalware everywhere”

I’m wondering how many cages that diver is going to trust after this terrifying diving experience. I bet netsec employees that have suffered a serious breach probably feel the same way about some of these technologies.

The point I’m trying to make with this post is that I never would have thought something like this could happen; those cages were ENGINEERED to withstand any kind of shark coming at them.

Understanding how you perceive the technologies you’re a fan of is extremely important. Always understand your personal status quo and your thoughts and feelings about how you’ve been successful with a particular product. Don’t let success convince you that a particular solution is impossible to break or breach, because that trust can very easily turn into complacency. Just because it works and reduces threats doesn’t make you bulletproof.

On a related note, I’m updating my bucket list and removing an item.

Posted in Uncategorized

The Digital Workspace – I Fight For the Users

The Digital Frontier Workspace
[Image: TRON (1982) movie poster]

In the original 1982 movie TRON, there is a phrase the character Tron says that more or less defines his purpose: “I fight for the users.”

Having become a VDI/Presentation specialist over the past several years, I’m pretty sure I’ve felt this way several times. At the end of the day, our job as IT employees is to provide value to the business. Below is a quick list of things I’ve done in the past that felt like fighting for the users:

  • Proved the need for more compute resource for overly dense environments
  • Rationalized the upgrade of underperforming storage
  • Justified the value of graphics capabilities inside a remote session
  • Worked with IT departments to enable faster and less disruptive delivery of updates and net new applications
  • Troubleshot performance, network and software issues that were impeding progress of application delivery

The transformation of “remote access” into the “Digital Workspace” has been a very long time coming, and for the most part the concept isn’t new. The new catch is that with SaaS becoming a normal delivery method for applications, there can be a great deal of added complexity when attempting to deliver all the applications someone might need in a one-stop shop. This was the message from the stage at VMworld this year. “Consumer Simple, Enterprise Secure” was the verbiage Sanjay Poonen used. He also used the term “Sesame Street Simple” to describe the end-user experience of signing on and accessing applications safely and securely, with a fast and simple BYOD enrollment process.

Here is where it gets fuzzy: the “Sesame Street Simple” concept, in my opinion, does not necessarily apply to the engineers and architects behind the solution. It’s the same as developing software: in order to make it simple for the end user, a very high amount of complexity is required on the back end to make the workflow as simple as possible. We can simplify portions of the task, such as storage and compute expansion, with the prevalence of HCI, but overall, providing a one-stop shop for on-prem and SaaS-based apps with a single sign-on action is a daunting task. It’s for this reason that I bring up the concept of fighting for the user; this is not an easy task. It’s a fight we must constantly keep in mind: we are in the business of enabling the business, and therefore the users, regardless of where the applications reside. I fight for the users, and sometimes that means fighting against IT departments who are reducing their value proposition in a fast-shifting SaaS landscape. If you cannot service the business with acceptable SLAs or timely responses, shadow IT will creep in and/or cause damage to the organization.

I will continue to propose solutions that ATTEMPT to make lives easier for both end users and engineers, but the very nature of making things simple on one side of the equation will almost always make them more complex on the other.

In regards to Workspace One, I’m actually very excited to watch the evolution of A2, which will deliver AppStacks directly to physical Windows 10 endpoints, likely via AirWatch. As a former veteran Microsoft SCCM (ConfigMgr) engineer, delivering applications and controls to Windows devices that aren’t connected to the corporate network isn’t a new concept; it’s been possible for quite some time by establishing a PKI, albeit complex. What is new is that with the improvements the Windows 10 operating system has brought about, it’s now possible to manage policy and applications on Windows 10 devices natively with Unified Endpoint Management software, which can provide sandboxing technology. This sandboxing and isolation is a BYOD game changer, and when combined with application delivery and a seamless experience across different operating systems and devices, it can make a huge difference and reduce the BS logon and auth time in between actual work. It’s my personal opinion that we’ll see a real uptick in UEM combined with Digital Workspace transformation over the next 3 years.

Until next time my friends, I hope to see you all on the battlefield of fighting for the users.

Posted in Uncategorized

Horizon View 6.2 – Cannot Disable Connection Server – Failed to update Connection Server

I recently saw an error message when trying to disable a Connection Server via the admin UI, which reads:

Failed to update the Connection Server.

Images taken from a twitter post by Ronald Westerhout @rwesterhout81

[Screenshot: Failed to update the Connection Server error dialog]

It turns out, per Ronald, that unless you have the SSL gateway and the Blast gateway either both checked OR both unchecked, the Disable button won’t work. I’m not sure which versions of View this problem exists in, because I pretty much only disable connection servers when there’s a problem or when doing an upgrade. Either it’s a bug, or a KB needs to be written for this error dialog. Until then, I hope this post finds you well, fellow Horizon administrators!

[Screenshot: Connection Server gateway checkbox settings]

Posted in Uncategorized

How To Reclaim ESXi VMFS storage with Ubuntu VMs

First of all, zero the space in the guest. You can do that easily with secure-delete. In the Ubuntu guest, the following srm command will recursively, quickly, and verbosely delete all files and folders under the target directory, overwriting the old files with zeros in only a single pass:

df -h
sudo apt-get install secure-delete
sudo srm -r -f -v -z -l -l /dir/to/clean/*
df -h
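
If secure-delete isn’t available to you, a rougher alternative (my own sketch, not part of the original write-up) is to fill the free space of the filesystem with a zero-filled file and then delete it. The /zerofill path is illustrative; dd will stop with a “no space left on device” error once the filesystem is full, which is expected:

sudo dd if=/dev/zero of=/zerofill bs=1M   # fill free space with zeros; errors out when the disk is full
sync                                      # make sure the zeros hit the disk
sudo rm /zerofill                         # remove the filler file, leaving the free space zeroed
df -h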

Next, simply Storage vMotion the VM to another datastore with the thin provisioned disk format selected; the zeroed blocks won’t be copied. If that’s not an option for you, you’ll need to power off the VM and punch the zeroes out with vmkfstools.

In ESXi:

<Power off the VM>

cd /vmfs/volumes/to/your/vm/folder
vmkfstools --punchzero vmdk-to-shrink.vmdk
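
To sanity-check the result, you can compare the provisioned size of the disk against the space actually consumed on VMFS. This is my own habit rather than part of the original procedure, and the -flat file name below is illustrative:

ls -lh vmdk-to-shrink-flat.vmdk   # provisioned size of the data disk
du -h vmdk-to-shrink-flat.vmdk    # blocks actually consumed on the datastore after punchzero
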
Posted in Uncategorized

Load Balancing Horizon View with NSX

This post will explain in detail how I set up NSX load balancing for Horizon View in a test environment while still meeting the requirements I had for load balancing. Those requirements were:

  1. As simple as possible
  2. Session performance is acceptable and scalable
  3. Ability to automatically fail nodes so a reconnect gets back in less than one minute
  4. Balance Horizon View SSL, PCoIP and HTML5 Blast session traffic appropriately
  5. The entire solution must be able to leverage a single public IP address for all communication

There are two primary ways of constructing an NSX load balancer: inline or one-armed. These drawings were borrowed from a VMworld 2014 slide deck.

[Diagrams: inline and one-armed load balancer topologies]

In this particular example, we opted for one-armed, which means the load balancer node lives on the same network segment as the security servers.

Let’s take a look at how this is logically set up in the test lab. You may notice that I used two ports for PCoIP, 4172 and 4173. This is because I wanted to keep it as simple as possible, and with the requirement of a single IP address, keeping the PCoIP listen port separate for each security server was the easiest option. I was able to load balance and persist both 443 and 8443 with NSX, but stepped around the balancing for PCoIP and simply passed it through. In production you can just use another public IP address if you want to keep PCoIP on 4172, then NAT it to the corresponding security server. If you need more detail on the lab setup, you can check out my previous post, which has more detail on the firewall rules used.

[Diagram: logical load-balancing layout for the test lab]
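
To spell the diagram out in text, here is roughly how the single public IP maps to the two security servers in this lab (the VIP and server names below are made up for illustration):

<public VIP>:443   ->  SS1:443 and SS2:443    (Horizon SSL/XML-API, load balanced and persisted)
<public VIP>:8443  ->  SS1:8443 and SS2:8443  (Blast, load balanced and persisted)
<public VIP>:4172  ->  SS1:4172               (PCoIP, passed straight through to security server 1)
<public VIP>:4173  ->  SS2:4173               (PCoIP, passed straight through to security server 2)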

Some general notes and findings about the setup before we get into the screenshots:

  • The demonstrated configuration is for external connectivity only at present, since persistence matters most when tunneled.
  • SSL passthru is utilized rather than offloading.
  • The DFW is in play here as well, and all firewall rules are in place according to the required communication.
  • In order to configure the security server to listen on 4173, I had to follow the KB and create a reghack on the box, AND configure the tunnel properly in the View GUI.
  • In order to get SSL and BLAST working together from a single VIP, I ended up needing separate virtual servers, pools and service monitoring profiles that all used the same VIP and Application profile.
  • I used ssl_sessionid as the persistence type for both SSL and BLAST, and IPHASH for the load balancing algorithm.
  • I wasn’t able to configure a monitor to correctly report 8443 as UP; instead it sat in WARN(DOWN) on the 8443 monitor. I opted to simply leverage 443 as the monitor for the Blast VIP, so there could potentially be a missed failover if only the Blast service were down.

Let’s take a look at the configuration screenshots of the View-specific load balancer edge that I used.

[Screenshot: NSX-LB 2015-09-24 23:17:41]

We opted for the simple configuration of SSL passthrough, and SSL Session ID was used for persistence.

[Screenshot: NSX-LB 2015-09-24 23:18:01]

I found you had to create a separate monitor for each pool for it to function properly; I’m not really sure why.

[Screenshot: NSX-LB 2015-09-24 23:31:03]

You should be able to use /favicon.ico to determine health, as we do with other load balancers, but the plain HTTPS GET worked for me and the favicon check didn’t.
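
If you want to eyeball what the monitors are seeing, a couple of quick curl checks from a box on the load balancer’s segment can help. These are my own sanity checks, not part of the NSX configuration, and the host name is a placeholder for your security server:

curl -vk https://security-server-1:443/ -o /dev/null             # plain HTTPS GET, what the working 443 monitor does
curl -vk https://security-server-1:443/favicon.ico -o /dev/null  # the favicon check that didn't pan out here
curl -vk https://security-server-1:8443/ -o /dev/null            # Blast port, the monitor that stayed in WARN(DOWN)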

[Screenshots: NSX-LB 2015-09-24 23:31:16 and 23:31:25]

** Important: notice that no ports were specified in the server pool, only the monitor port. The service ports were specified only in the virtual servers below.

[Screenshots: NSX-LB 2015-09-24 23:43:47 and 23:43:59]

[Screenshot: NSX-LB 2015-09-24 23:45:04]

[Screenshot: NSX-LB 2015-09-24 23:45:13]

[Screenshots: NSX-LB 2015-09-24 23:46:20 and 23:46:41]

Let’s take a look at the screenshots of the View security server configuration.

[Screenshot: View security server configuration, part 1]

[Screenshot: View security server configuration, part 2]

If you have to make a security server listen on another port, don’t forget to change it in the registry as well as in the secure gateway URL field. Also, don’t forget to let it through the Windows firewall (see the sketch after the screenshot below).

[Screenshot: View security server configuration, part 3]
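
For the Windows firewall piece mentioned above, something along these lines on the security server opens the alternate PCoIP port. This is a minimal sketch assuming the built-in Windows firewall is in use; the rule names are just labels I made up:

netsh advfirewall firewall add rule name="PCoIP TCP 4173" dir=in action=allow protocol=TCP localport=4173
netsh advfirewall firewall add rule name="PCoIP UDP 4173" dir=in action=allow protocol=UDP localport=4173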

And voila! We have a working configuration. Here’s a quick demonstration of the failover time, which can be completely tuned. Of note, we only tested Blast failover, but SSL and PCoIP work just fine as well. The best part is we were able to create a COMPLETELY separate isolated and firewalled security server without even leveraging a DMZ architecture.

If you’re looking for another AWESOME post on how the NSX load balancer is utilized, I learned more from this one by Roi Ben Haim than from the pubs. Great step by step setup that helped me big time.

Posted in Horizon View 6.0, NSX