
Nested vSphere 6 VSAN Lab

Here is a quick walk-through on creating a nested vSphere 6 VSAN lab. The lab requires three nested ESXi 6.0 hosts, each configured with three hard disks, and a vCenter Server 6.0 (Windows or VCSA). This post covers creating the nested ESXi hosts and configuring a VSAN cluster.

[Image: VSAN lab overview]

I built out the lab manually, but the AutoLab Kit includes all the shell VMs you need to build out a nested VSAN lab on vSphere 5.5 (6.0 support will likely be coming soon). If you don’t have a home lab but still want to get some lab time working with VMware VSAN, check out the VMware Hands-on Lab HOL-SDC-1408 – What’s New with Virtual SAN 6.

This nested VSAN lab was created on a single-socket, 4-core ESXi 6.0 host with 32 GB of memory. The nested lab environment has 26 GB of memory allocated (6 GB x 3 for the ESXi hosts and 8 GB for the VCSA) but only consumes approximately 9 GB of memory (TPS is enabled).

Now on to setting up a nested VSAN lab…

First, create three VMs to be the nested ESXi hosts. When creating each VM, select Other for the Guest OS Family and VMware ESXi 6.x for the Guest OS Version.
[Screenshot: Guest OS selection – Other / VMware ESXi 6.x]

On the Customize Hardware step of the New Virtual Machine wizard, allocate 2 vCPUs and 6 GB of memory. Expand the CPU section and select Expose hardware assisted virtualization to the guest OS.
[Screenshot: Expose hardware assisted virtualization to the guest OS]
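If you prefer to script pieces of the build, the nested HV setting can also be flipped with pyVmomi after the shell VMs exist. The sketch below is just that, a sketch: the vCenter address, credentials, and VM names (ESXi-01 through ESXi-03) are placeholders for whatever you use in your own lab.

import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Lab only: skip certificate validation.
si = SmartConnect(host='vcsa.lab.local', user='administrator@vsphere.local',
                  pwd='VMware1!', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    if vm.name in ('ESXi-01', 'ESXi-02', 'ESXi-03'):
        # Same effect as checking "Expose hardware assisted virtualization to the guest OS".
        vm.ReconfigVM_Task(vim.vm.ConfigSpec(nestedHVEnabled=True))
view.DestroyView()
# (The later sketches in this post reuse si, content, and the vim import.)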

Allocate 10 GB for the first hard disk; this disk will be used to install ESXi, and the remaining space will become a local datastore on each of the hosts. Then add two new virtual hard disks so the VM has three virtual disks total.
[Screenshot: Adding new hard disks to the VM]

Allocate 10 GB to the second disk. This disk will be marked as a Flash disk (later in the process) to be used by VSAN. Allocate 100 GB for the final disk, which will be used for VSAN capacity. These disks can all be thin provisioned.
[Screenshot: ESXi VM with three hard disks]
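Adding the two extra disks can be scripted as well. The helper below is hypothetical (add_thin_disk is my name, not a pyVmomi function) and continues the session from the earlier sketch; it attaches a thin-provisioned disk to the VM's existing SCSI controller, so calling it with 10 and then 100 gives each shell VM its second and third disks.

from pyVim.task import WaitForTask

def add_thin_disk(vm, size_gb):
    """Attach a new thin-provisioned disk to the VM's existing SCSI controller."""
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    used = [d.unitNumber for d in vm.config.hardware.device
            if getattr(d, 'controllerKey', None) == controller.key]
    unit = next(u for u in range(16) if u != 7 and u not in used)   # unit 7 is reserved

    disk = vim.vm.device.VirtualDisk(
        key=-1, controllerKey=controller.key, unitNumber=unit,
        capacityInKB=size_gb * 1024 * 1024,
        backing=vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
            diskMode='persistent', thinProvisioned=True))
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[change]))

# For each shell VM (the vm objects from the previous sketch):
WaitForTask(add_thin_disk(vm, 10))    # second disk, tagged as flash later
WaitForTask(add_thin_disk(vm, 100))   # third disk, VSAN capacity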

Once the three VMs are created, attach the ESXi ISO and install ESXi 6.0 to the first 10 GB disk on each VM. When the install completes, configure the management network on each of the hosts.

In vCenter, create a new cluster but do not enable VSAN yet. Add the nested ESXi hosts to the new cluster. VSAN requires a VMkernel adapter on each host configured for VSAN traffic. Create a VMkernel adapter on each host with Virtual SAN traffic enabled, or enable Virtual SAN traffic on the management VMkernel adapter.
[Screenshot: Enabling Virtual SAN traffic on the VMkernel adapter]
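If you want to script this step, the sketch below continues the pyVmomi session from the first example and uses the host-level HostVsanSystem.UpdateVsan_Task call to tag a vmknic for VSAN traffic. The cluster name (VSAN-Cluster) and the vmknic (vmk0, i.e. the management VMkernel) are placeholders; adjust them if you created a dedicated VMkernel adapter.

# Reuses si, content, and the vim import from the first sketch.
cl_view = content.viewManager.CreateContainerView(content.rootFolder,
                                                  [vim.ClusterComputeResource], True)
cluster = next(c for c in cl_view.view if c.name == 'VSAN-Cluster')   # placeholder name
cl_view.DestroyView()

for host in cluster.host:
    port = vim.vsan.host.ConfigInfo.NetworkInfo.PortConfig(device='vmk0')
    config = vim.vsan.host.ConfigInfo(
        networkInfo=vim.vsan.host.ConfigInfo.NetworkInfo(port=[port]))
    host.configManager.vsanSystem.UpdateVsan_Task(config)   # enable VSAN traffic on vmk0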

VSAN requires that each host providing storage to the VSAN cluster have at least one SSD. In a physical environment, supported SSDs would be recognized and flagged as Flash disks. In the nested environment, the Flash tag needs to be set manually on the second 10 GB drive on each host. For each host, go to Manage > Storage and select Storage Devices. Highlight the second 10 GB disk and click the Mark the selected disks as flash disks icon to tag the disk as a Flash disk.
[Screenshot: Mark the selected disks as flash disks]
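The tagging can be scripted too. This sketch reuses the cluster object from the VSAN-traffic example and calls the storage system's MarkAsSsd_Task (which, as far as I can tell, is what the Mark as flash icon does) for the second 10 GB device on each host. The canonical device name is a placeholder, so check what the second disk actually shows up as in the UI before running anything like this.

# Reuses `cluster` and the session from the earlier sketches.
CACHE_DISK = 'mpx.vmhba0:C0:T1:L0'   # placeholder: canonical name of the second 10 GB disk

for host in cluster.host:
    storage = host.configManager.storageSystem
    for lun in storage.storageDeviceInfo.scsiLun:
        if isinstance(lun, vim.host.ScsiDisk) and lun.canonicalName == CACHE_DISK and not lun.ssd:
            storage.MarkAsSsd_Task(lun.uuid)   # tag the device as flash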

Once the VMkernel adapter has been configured for VSAN traffic and the 10 GB disk has been tagged as Flash on each of the hosts, edit the cluster settings to turn on Virtual SAN. Leave Add disks to storage set to Manual.
[Screenshot: Turn ON Virtual SAN in the cluster settings]
If Add disks to storage is set to Automatic, VSAN will automatically claim any unused disks attached to the hosts in the cluster. This is fine if everything has been configured correctly, but leaving it set to Manual and then claiming the disks yourself allows you to verify that the hosts and disks have been configured as expected, and it also lets you walk through the process (kind of the purpose of working through a lab).
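For completeness, here is what the same choice looks like through the API: a sketch that enables VSAN on the cluster with autoClaimStorage turned off, which corresponds to leaving Add disks to storage on Manual. It reuses the cluster object from the earlier sketches.

# Reuses `cluster` from the earlier sketches.
vsan_config = vim.vsan.cluster.ConfigInfo(
    enabled=True,
    defaultConfig=vim.vsan.cluster.ConfigInfo.HostDefaultInfo(autoClaimStorage=False))
cluster.ReconfigureComputeResource_Task(vim.cluster.ConfigSpecEx(vsanConfig=vsan_config),
                                        True)   # True = modify the existing configuration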

In the cluster's Manage settings, select Disk Management under Virtual SAN and click the Claim disks icon.
[Screenshot: Claim disks icon under Disk Management]

If the VMkernel adapters and disks are configured correctly, all hosts should show up in the Claim Disks for Virtual SAN Use window. Each host should have a Flash disk and an HDD. Click the Select all button to claim all the disks to be used by VSAN.
[Screenshot: Claim Disks for Virtual SAN Use window]
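The claim step has an API equivalent as well. The sketch below, reusing the same cluster object, asks each host which disks VSAN considers eligible and builds a disk group from the flash device plus the HDDs; it assumes the HostVsanSystem QueryDisksForVsan and InitializeDisks_Task calls behave as in the 6.0 API.

# Reuses `cluster` from the earlier sketches. Builds one disk group per host.
for host in cluster.host:
    vsan_sys = host.configManager.vsanSystem
    results = vsan_sys.QueryDisksForVsan()      # per-device eligibility check
    flash = [r.disk for r in results if r.state == 'eligible' and r.disk.ssd]
    hdds = [r.disk for r in results if r.state == 'eligible' and not r.disk.ssd]
    if flash and hdds:
        mapping = vim.vsan.host.DiskMapping(ssd=flash[0], nonSsd=hdds)
        vsan_sys.InitializeDisks_Task([mapping])   # claim the disks into a disk group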

Once the disks have been claimed, each host will provide a disk group to the VSAN cluster.
[Screenshot: Disk groups on each host]

The VSAN datastore is ready and the storage can be consumed by hosts in the VSAN Cluster.
[Screenshot: vsanDatastore]
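If you want to confirm the result from a script rather than the web client, a quick sketch using the same cluster object will report the VSAN datastore's capacity and free space.

# Reuses `cluster` from the earlier sketches.
for ds in cluster.datastore:
    if ds.summary.type == 'vsan':
        print('%s: %d GB capacity, %d GB free' %
              (ds.name, ds.summary.capacity // 2**30, ds.summary.freeSpace // 2**30))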

As I mentioned, I built this lab out manually, and it took approximately 2 hours from start to finish. So, what next? Here are a few lab ideas:

  • Deploy some VMs on the VSAN datastore.
  • Get familiar with the VSAN VM Storage Policies.
  • Fail a node to see how VSAN responds to a failure.
  • Add additional storage to the VSAN.
  • Create a host that consumes VSAN storage but does not provide any disk groups to the VSAN cluster.

Have fun!

Comments and questions welcome.
