
VCP6-DCV Delta Study – Section 3 – Objective 3.3

This post covers Section 3, Configure and Administer Advanced vSphere Storage, Objective 3.3, Configure vSphere Storage Multi-pathing and Failover.

The vSphere Knowledge covered in this objective:

  • Configure/Manage Storage Load Balancing
  • Identify available Storage Load Balancing options
  • Identify available Storage Multi-pathing Policies
  • Identify features of Pluggable Storage Architecture (PSA)
  • Configure Storage Policies
  • Enable/Disable Virtual SAN Fault Domains

Objective 3.3 VMware Resources:


– Configure/Manage Storage Load Balancing
Load balancing is the process of spreading server I/O requests across all available SPs (storage processors) and their associated host server paths.

Load balancing can be used to optimize IOPS, MBps, and response times.

View Datastore Paths in the vSphere Storage Guide on page 190.

View and Manage Datastore Paths in the Web Client -> Storage -> Datastore -> Manage -> Settings -> Connectivity and Multipathing
From here you can view the path selection policy, view the paths, enable/disable paths, and edit the multipathing policy for a datastore.
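
If you prefer the command line, you can first map a datastore to its backing device and then look at that device's paths; a quick sketch (datastore and device names in your environment will differ):
List each VMFS datastore extent and the naa identifier of the device backing it:
esxcli storage vmfs extent list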

View Storage Device Paths in the vSphere Storage Guide on page 190.

View and Manage Storage Device Paths in the Web Client -> Hosts and Clusters -> Host -> Manage -> Storage -> Storage Devices
From here you can view the path selection policy, view the paths, enable/disable paths, and edit the multipathing policy for a storage device.
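
The same path information is available with esxcli; a sketch, where the naa.* device ID is a placeholder for one of your own devices:
List all paths to a specific storage device, including their state and runtime names:
esxcli storage core path list --device=naa.xxxxxxxxxxxxxxxx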

Setting a Path Selection Policy in the vSphere Storage Guide on page 190.

Changing the PSP for a storage device.
The path selection policy can be set to Fixed, Most Recently Used (MRU), Round Robin (RR), or to an installed third-party path selection policy.
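
The PSP can also be changed from the command line; a hedged example that sets a device to Round Robin (the naa.* device ID is a placeholder, and you should confirm the policy is supported by your array first):
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR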

NFS 4.1 supports multipathing. @ChrisWahl has an excellent post on it here: VMware Embraces NFS 4.1, Supports Multipathing and Kerberos Authentication
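
NFS 4.1 multipathing is configured by supplying multiple server addresses when the datastore is mounted; a minimal sketch, where the IP addresses, export path, and datastore name are placeholders:
esxcli storage nfs41 add --hosts=192.168.1.20,192.168.1.21 --share=/export/nfs01 --volume-name=nfs41-ds01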

These three knowledge topics for this objective overlap each other, so I “condensed” them into a single section.
– Identify available Storage Load Balancing options
– Identify available Storage Multi-pathing Policies
– Identify features of Pluggable Storage Architecture (PSA)
Managing Multiple Paths in the vSphere Storage Guide on page 184.

The Pluggable Storage Architecture (PSA) is a collection of Storage APIs.
The PSA allows third-party software developers to design their own load-balancing techniques and failover mechanisms for a particular storage array and insert their code directly into the ESXi storage I/O path.

PSA – Pluggable Storage Architecture
NMP – Native Multipathing Plug-In. The generic VMware multipathing module.
PSP – Path Selection Plug-In or Path Selection Policy.
SATP – Storage Array Type Plug-In or Storage Array Type Policy.
MPP – Multipathing Plug-In.

The VMkernel multipathing plug-in that ESXi provides by default is the VMware Native Multipathing Plug-In (NMP).

SATPs and PSPs can be built in (native) or provided by a third party. Multiple third-party MPPs can run in parallel with the VMware NMP. Third-party MPPs replace the behavior of the NMP and take complete control of the path failover and load-balancing operations for specified storage devices.

The SATP monitors the health of the physical paths to storage, reports changes in path state, and handles path failover for a given storage array.

The PSP handles the path selection for a given device.
Default NMP PSPs:

  • VMW_PSP_MRU – Most Recently Used (MRU)
    The host selects the path used most recently. If that path becomes unavailable, the host selects another path; the host will not revert back to the original path when it becomes available again. MRU is the default policy for most active-passive storage devices.
  • VMW_PSP_FIXED – Fixed
    The host uses a designated preferred path, or the first working path discovered if a preferred path has not been set. Fixed is the default policy for most active-active storage devices.
  • VMW_PSP_RR – Round Robin
    The host rotates through all active paths when connecting to active-passive arrays, or through all available paths when connecting to active-active arrays.

When using the Fixed path policy, the preferred path is marked with an asterisk (*) in the Preferred column.
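
With the Fixed policy, the preferred path can also be set with esxcli; a sketch, where the device ID and path runtime name are placeholders:
esxcli storage nmp psp fixed deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --path=vmhba2:C0:T1:L0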

The Round Robin PSP defaults to an IOPS limit of 1000 per path: 1000 I/Os are sent down a path before switching to the next path. This limit can be adjusted with esxcli. Alternatively, a bytes limit can be set on the paths when using RR.
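
A sketch of adjusting the Round Robin limits with esxcli (the device ID and values are placeholders; check your storage vendor's recommendation before changing them):
Lower the IOPS limit so the path rotates after every I/O:
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1
Or switch the same device to a bytes-based limit instead:
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=bytes --bytes=8972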

If no SATP is assigned to the device by the claim rules, the default SATP for iSCSI or FC devices is
VMW_SATP_DEFAULT_AA. The default PSP is VMW_PSP_FIXED.
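
The default PSP assigned by a SATP can be changed per host; a hedged example that makes Round Robin the default for the generic active-active SATP (newly claimed devices pick up the new default):
esxcli storage nmp satp set --satp=VMW_SATP_DEFAULT_AA --default-psp=VMW_PSP_RR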

Path status:

  • Active – The path is available for I/O. Paths currently being used for I/O are marked as Active (I/O).
  • Standby – The path can become available if an active path fails.
  • Disabled – The path has been disabled and cannot process I/O (paths can also be disabled from the CLI, see the example below).
  • Dead – The host is unable to connect to storage through this path.

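A path can also be disabled and re-enabled from the command line; a sketch, where the path runtime name is a placeholder:
Disable a path:
esxcli storage core path set --state=off --path=vmhba2:C0:T1:L0
Re-enable the path:
esxcli storage core path set --state=active --path=vmhba2:C0:T1:L0
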
Managing Storage Paths and Multipathing Plug-Ins in the vSphere Storage Guide on page 192.
The esxcli command is used to manage PSA multipathing plug-ins and the storage paths assigned to them.

List Multipathing Claim Rules for the host:
esxcli storage core claimrule list

Display Multipathing Modules:
esxcli storage core plugin list --plugin-class=MP
This will show the NMP and any third-party MPPs loaded.

Display SATPs for the host:
esxcli storage nmp satp list

Display NMP storage devices:
esxcli storage nmp device list
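
As a further example, a new claim rule can be added and loaded into the VMkernel; a hedged sketch, where the rule number, vendor, and model strings are placeholders:
esxcli storage core claimrule add --rule=500 --type=vendor --vendor=ExampleVendor --model="*" --plugin=NMP
esxcli storage core claimrule load
Loaded rules are applied to unclaimed paths with esxcli storage core claimrule run.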

– Configure Storage Policies
Virtual Machine Storage Policies in the vSphere Storage Guide on page 225.
Storage policies define the types and classes of storage requirements.
A Storage Policy contains a Storage Rule or a collection of Storage Rules.

Storage Rule Types:

  • Rule Based on Storage-Specific Data Services
    Rules are based on data services that storage entities such as Virtual SAN and Virtual Volumes
    advertise.
  • Rule Based on Tags
    Rules based on tags reference datastore tags that you associate with specific datastores.

Creating a Tag
A tag can be assigned to an existing category or a new category can be created.

Assign a tag to a Datastore

Storage Policies configured/managed in the Web Client -> Home -> VM Storage Policies

Create a Rule Based on Storage-Specific Data Services
Choose the data service and create rules from the data service capabilities.

Create a Rule Based on Tags
Select the Category and the Tags to assign to the rule.

A storage policy can include multiple rule sets. Storage-Specific Data Service rules and Tag-based rules can be combined in the same storage policy.

View VMs and Virtual Disks and Storage Compatibility in the Web Client -> VM Storage Policies -> Storage Policy -> Monitor

Monitor Storage Compliance for a virtual machine in the Web Client -> VMs and Templates -> Virtual Machine -> Monitor -> Policies

Provisioning a VM to compliant storage
Select the VM Storage Policy, and the storage compatible with the selected policy is displayed.

– Enable/Disable Virtual SAN Fault Domains
Designing and Sizing Virtual SAN Fault Domains in the Administering VMware Virtual SAN on page 29.

You can group Virtual SAN hosts that could potentially fail together by creating a fault domain and assigning one or more hosts to it. Failure of all hosts within a single fault domain is treated as one failure. If fault domains are specified, Virtual SAN will never put more than one replica of the same object in the same fault domain.

Virtual SAN requires a minimum of 2*n + 1 fault domains in the cluster, where n is the number of failures to tolerate. For example, tolerating one failure (n = 1) requires at least three fault domains, and tolerating two failures (n = 2) requires at least five.

For example – hosts in one rack are in one fault domain, hosts in a second rack are in another, and hosts in a third rack are in yet another. A VSAN policy set to tolerate a single host failure will keep the data replicas spread across fault domains. A fault domain appears as a single host to VSAN.

Managing Fault Domains in Virtual SAN Clusters in the Administering VMware Virtual SAN on page 72.

Virtual SAN Fault Domains created and managed in the Web Client -> Hosts and Clusters -> Cluster -> Manage -> Settings -> Virtual SAN -> Fault Domains
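
A host's fault domain membership can also be checked or set from the ESXi shell; a hedged sketch, assuming the esxcli vsan faultdomain namespace is available in your build (the fault domain name is a placeholder):
Show the fault domain the host currently belongs to:
esxcli vsan faultdomain get
Assign the host to a fault domain named rack-1:
esxcli vsan faultdomain set --fdname=rack-1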

  • Configure a minimum of three fault domains in the Virtual SAN cluster.
  • A host not added to any fault domain is considered to be its own single host fault domain.
  • You do not need to assign every Virtual SAN host to a fault domain.
  • It is recommended that you configure fault domains with a uniform number of hosts.
  • When moved to another cluster, Virtual SAN hosts retain their fault domain assignments.
  • You can add any number of hosts to a fault domain. Each fault domain is considered to be one host.
  • Only hosts running vSphere 6 can be added to a fault domain.

More Section Objectives in the VCP6-DCV Delta Exam Study Guide Index

I hope you found this helpful. Feel free to add anything associated with this section using the comments below. Happy studying.
