There are a lot of blog posts on how to set up the cool new Virtual SAN, but not many on how to destroy the Virtual SAN object. Probably because it’s really simple. Never fear! In a shameless effort to drive hits from Google, here is your step-by-step on how to delete the vsanDatastore.
Deleting the vsanDatastore means that all the data on the underlying disk groups will be nuked. Gone. Vamoosed. This means you must first migrate the VMs either to available local storage or to another storage solution. You can delete a disk group from one of the nodes and reformat those disks into VMFS volumes, but you will obviously lose fault tolerance if you move VMs there. But hey, you’re the one deleting the Virtual SAN object, not me. You should know why you’re doing it and what you’re moving to. I had to do this because I needed to nuke and rebuild my cluster on 5.5 U1, coming from the beta refresh.
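If you just want to reclaim the disks on a single node (the disk-group route mentioned above), you can also dissolve the disk group from the ESXi shell instead of the Web Client. A rough sketch; the device name below is a placeholder, and note that removing the SSD takes the entire disk group with it:

```shell
# List the disks VSAN has claimed on this host
esxcli vsan storage list

# Remove the disk group by removing its SSD (substitute your own device name)
esxcli vsan storage remove -s naa.500xxxxxxxxxxxxx
```

Once the disk group is gone, the local disks show up as candidates in the add storage wizard and can be formatted as VMFS.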
1. Migrate all the VMs and templates elsewhere. Where that is, well that is your problem. You can verify if there are any objects remaining by examining the related datastore object here.
2. Turn off HA on the Virtual SAN enabled cluster. You’ll need this to be off to turn off the Virtual SAN feature.
3. Delete all the disk groups individually. You’ll get warned that the host should go into maintenance mode first if you’re performing maintenance, but since we’re nuking the sucker we don’t want to do that.
4. As we delete the disk groups, we can see the local disks become available in the add storage wizard for each host.
5. Repeat the process until all the disk groups have been deleted.
6. We are now ready to turn off the VSAN feature on the cluster object.
7. Click OK on the warning.
8. Re-enable HA on the cluster. All done!
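The whole sequence can also be scripted from PowerCLI, assuming a build recent enough that Set-Cluster exposes the -VsanEnabled switch. A sketch, not gospel; the vCenter, cluster, and datastore names are placeholders, and the disk-group deletion itself still happens per host:

```powershell
# Connect to vCenter (prompts for credentials)
Connect-VIServer vcenter.lab.local

# 1. Get the VMs and templates off the vsanDatastore; the destination is your problem
Get-VM -Datastore vsanDatastore | Move-VM -Datastore otherDatastore

# 2. Turn off HA so the VSAN feature can be disabled later
Set-Cluster -Cluster VSAN-Cluster -HAEnabled:$false -Confirm:$false

# 3-5. Delete the disk groups on each host (Web Client, or esxcli vsan storage remove)

# 6-7. Turn off the VSAN feature on the cluster object
Set-Cluster -Cluster VSAN-Cluster -VsanEnabled:$false -Confirm:$false

# 8. Re-enable HA
Set-Cluster -Cluster VSAN-Cluster -HAEnabled:$true -Confirm:$false
```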
So I finally got around to refreshing my VSAN after the 1.0 release and I was hit by the AHCI bug. I must say, I had forgotten how powerful it is to put the disks “close to home” with VSAN, and how truly simple it is to set up. The goal of this post is simply to share the hardware that I used to build my little VSAN lab, which has ~11TB of capacity and 360GB of flash. I also ran two quick IOmeter runs and posted their output. And yes, the graphic over there is cheesy. I have lots of children in my house, so it somewhat explains my rationale.
I purchased 4 refurbished C1100 CSTY-24 servers spec’d as such:
(2) Xeon L5520 CPUs @ 2.27 GHz
72 GB Memory
(3) 2TB Hitachi 7200 RPM spinning disks per server
I purchased them from deepdiscountservers and they cost me about $900 USD each, which was a pretty good deal, even compared against a whitebox build. Of note, since these are refurbed, OEM’d, custom-spec’d systems, Dell will not support them, and good luck updating the BIOS on them. One of them had some bizarre fan issue where it would spin the fans all the way up at random even though the temp and workload were not changing, so I’m just dealing with that. If I had to buy again, I’m pretty sure I’d actually choose a different flavor of pizza box. May you have better fortune than I in refurb land. Also of note, there is no RAID card in this server, just an AHCI controller, which is perfect for what we need to do with VSAN. Assuming you’re on the Beta refresh. : )
I also purchased (3) Kingston SSDNow V300 120GB SATA disks from Microcenter for a cool $70 each.
So far with the VSAN beta refresh, everything has been great! Two base runs of IOmeter on a Windows 7 VM came out as follows. I used http://vmktree.org/iometer/ to format the output nice and quick. I will say this: not bad considering it’s refurbed servers with 7200 RPM disks and non-certified MLC flash! Not bad for a basement project, not bad at all. Stay thirsty my friends.
VM Storage Policy: 1 Failure
VM Storage Policy: 1 Failure, 1 Stripe per Disk, 20% Read Cache
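For reference, the friendly policy names above map to lower-level VSAN rule names (hostFailuresToTolerate, stripeWidth, cacheReservation) that you can inspect or set per object class from the ESXi shell. A sketch only; the cacheReservation units vary by tooling, so check your build’s docs before expressing the 20% read cache this way:

```shell
# Show the current default policies (cluster, vdisk, vmnamespace, vmswap classes)
esxcli vsan policy getdefault

# Default new virtual disks to 1 failure to tolerate and a stripe width of 1
esxcli vsan policy setdefault -c vdisk \
  -p '(("hostFailuresToTolerate" i1) ("stripeWidth" i1))'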