vSphere 6.5 Storage – What’s new

At VMworld Europe 2016 VMware announced the long-awaited vSphere 6.5. This blog post focuses on the new and enhanced storage features in vSphere 6.5.

VMFS-6

A new version of the VMFS file system is introduced, providing an all-round performance improvement including faster file creation, device discovery and device rescanning. Perhaps the biggest change is that VMFS-6 is 4K aligned, which paves the way for 4K-native drives once VMware supports them.

There is no in-place upgrade path to VMFS-6 because of the amount of on-disk changes. Moving to VMFS-6 should be treated as a migration, for example by creating a new VMFS-6 datastore and using Storage vMotion to move the virtual machines over.
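
To see which datastores still need to be migrated, the VMFS version of a datastore can be checked from the ESXi shell with vmkfstools; the datastore name below is just an example:

  # Print the file system attributes of a datastore, including the VMFS version
  vmkfstools -Ph /vmfs/volumes/datastore01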

Limits Increase

There are two major limit increases in vSphere 6.5. First, ESXi hosts running version 6.5 can now support up to 2,000 paths in total, up from 1,024 in previous versions. Second, ESXi hosts running version 6.5 can now support up to 512 devices, a 2x increase over previous versions of ESXi where the number of supported devices was limited to 256.
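
If you want to know how close a host is to these limits, a rough count can be taken from the ESXi shell; this is a quick sketch based on the standard esxcli output fields:

  # Number of storage devices seen by this host
  esxcli storage core device list | grep -c "Display Name:"
  # Number of logical paths to those devices
  esxcli storage core path list | grep -c "Runtime Name:"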

NFS 4.1 Improvements

The major improvement for NFS 4.1 is the support for hardware acceleration, which allows certain operations to be offloaded to the array. The other improvements are listed below, followed by an example Kerberos mount:

  • NFS 4.1 will now be fully supported with IPv6
  • NFS 4.1 Kerberos AES encryption support (AES256-CTS-HMAC-SHA1-96 and AES128-CTS-HMAC-SHA1-96)
  • NFS 4.1 Kerberos integrity checking support (SEC-KRB5I)
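
As a sketch of how the Kerberos options surface on the command line, an NFS 4.1 datastore can be mounted with integrity checking via esxcli; the server name, export and datastore name are made up, and the host must already be configured for Active Directory/Kerberos authentication:

  # Mount an NFS 4.1 export with Kerberos integrity checking (SEC_KRB5I)
  esxcli storage nfs41 add -H nas01.example.local -s /export/datastore01 -v nfs41-ds01 -a SEC_KRB5I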

iSCSI Improvements

iSCSI routed connections

The first improvement is that iSCSI routed connections are now supported. Another improvement is that it is now possible to use different gateway settings per VMkernel interface. This means that port binding can be used to reach targets in different subnets.
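
The per-interface gateway is set on the VMkernel interface itself. The sketch below assumes a dedicated iSCSI interface vmk2 and example addresses; the gateway option on this command is new in 6.5:

  # Assign a static IP and a dedicated gateway to an iSCSI VMkernel interface
  esxcli network ip interface ipv4 set -i vmk2 -t static -I 192.168.50.10 -N 255.255.255.0 -g 192.168.50.1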

UEFI boot

It is also now possible to use UEFI iSCSI boot. With this you can boot an ESXi host from an iSCSI LUN using the UEFI settings in the host firmware.

SIOC v2

Storage I/O Control will be policy driven via I/O Filters. This allows you to expand Storage Policies with SIOC settings such as Limits, Reservations and Shares. By default there will be three configuration options for these settings, called Low, Normal and High, and it is possible to customize them to your liking.

SIOC Storage policy integration

In the initial release of SIOC v2 there will be no support for VSAN or VVOLs. SIOC v2 is only supported with virtual machines that run on VMFS or NFS backed datastores.

VSAN 6.5

VSAN 6.5 is included in vSphere 6.5 and adds a few new features and a different licensing setup.

iSCSI service

The VSAN iSCSI service allows you to create iSCSI targets and LUNs on top of the VSAN datastore. These LUNs are VSAN objects and have a Storage Policy assigned to them. This feature is aimed at physical workloads such as Microsoft clustering with shared disks; it is not intended for connecting other vSphere clusters. It is possible to create a maximum of 1,024 LUNs and 128 iSCSI targets per cluster, and the LUN capacity limit is 62TB.
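
Once the iSCSI target service is enabled on the cluster, the configuration can be inspected from any vSAN host. The commands below are a sketch assuming the esxcli vsan iscsi namespace that ships with VSAN 6.5; check esxcli on your build for the exact sub-commands:

  # Show whether the vSAN iSCSI target service is enabled on this host
  esxcli vsan iscsi status get
  # List the iSCSI targets that have been created on the vSAN datastore
  esxcli vsan iscsi target list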

2-Node Direct Connect

VSAN 2-Node Direct Connect allows you to create a VSAN ROBO configuration without a switch by simply connecting the two hosts with cross-connect cables. This can make a huge difference in total cost of ownership because it is no longer required to purchase 10 Gbit switches to connect the hosts.

VSAN 2-Node Direct Connect

Furthermore, for this type of configuration it is possible to tag a VMkernel interface for witness traffic, so that the witness traffic can be separated from the VSAN data traffic.
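
Tagging an interface for witness traffic is done from the ESXi shell; a minimal sketch, assuming vmk1 is the interface that should carry the witness traffic:

  # Tag vmk1 for vSAN witness traffic and verify the result
  esxcli vsan network ip add -i vmk1 -T=witness
  esxcli vsan network list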

Licenses

The different VSAN licenses have been changed and an All-Flash configuration is now possible with the VSAN standard license. This means all VSAN licenses now support an All-Flash configuration. If you want to use data services like deduplication, compression or erasure coding you still have to buy the VSAN Advanced license. For a quick overview of the different licensing options visit the VMware website at http://www.vmware.com/products/virtual-san.html

Hardware support

VSAN 6.5 also introduces support for 512e drives, which will enable larger capacities.
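
To check whether a drive is a 512-native or 512e device, the logical and physical sector sizes can be listed from the ESXi shell; this assumes the capacity sub-command that was introduced alongside 512e support:

  # List logical and physical block sizes per device (512e drives report 512/4096)
  esxcli storage core device capacity list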

VVOLs 2.0

Array-based Replication

VVOLs 2.0 adds support for array-based replication. Unlike traditional array-based replication, which replicates an entire datastore (for example NetApp MetroCluster), VVol replication gives you fine-grained control over virtual machine replication. This means you have the flexibility to replicate a group of virtual machines or even individual virtual machines instead of everything on the datastore.

VVol Array-based Replication

DR API

vSphere 6.5 also offers public APIs for triggering DR operations, as well as PowerCLI cmdlets for administrator-level orchestration and automation:

  • Replication Discovery – VVol disaster recovery discovers the current replication relationships between two fault domains.
  • Sync Replication Group – Synchronizes the data between source and replica.
  • Test Failover – To ensure that the recovered workloads will be functional after a failover, administrators periodically run the test-failover workflow. After a test, administrators can optionally move the devices from test to production when ready for a real failover.
  • Disaster Recovery and Planned Migration – For planned migration, on-demand sync can be initiated at the recovery site.
  • Setting up protection after a DR event – After the recovery on a peer site, administrators can set protection in the reverse direction.

Oracle RAC

VVols is also now validated to support Oracle RAC workloads.

VASA 3.0

VASA 3.0 introduces a new concept called ‘line of service’. A line of service is a group of related capabilities with a specific purpose, such as inspection, compression, encryption, replication, caching, or persistence. In addition to configuring replication at the individual Storage Policy level, it is now possible to create a line of service for replication and assign it to multiple Storage Policies.

As an example, imagine you have three Storage Policies: Gold, Silver and Bronze. While these policies have very different storage capabilities assigned, replication can be managed once through a replication line of service instead of being configured on each individual Storage Policy.

Cross vCenter vMotion – Cannot connect to host

21 November – VMware engineering has provided a fix for this issue. The results are posted at the end of this post.

17 October – This issue is currently under investigation by VMware and the SR has been referred to VMware engineering. For now the workaround is to not use the provisioning VMkernel interface and TCP/IP stack.

Part of a solution for a customer was the ability to migrate VMs between vCenter Servers. Furthermore, company policy dictates that the management network is used only for management purposes.

By default, data for VM cold migration, cloning, and snapshots is transferred through the management network. This traffic is called provisioning traffic. On a host, you can dedicate a separate VMkernel interface to the provisioning traffic, for example, to isolate this traffic on another VLAN.

To comply with the policy the design used the provisioning VMkernel interface and TCP/IP stack to isolate the provisioning traffic to another VLAN.
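
For reference, creating such an interface on the built-in vSphereProvisioning TCP/IP stack can also be done from the ESXi shell. The interface name and port group below are made up and the address is the one that appears later in this post; tagging the interface for provisioning traffic was done through the Web Client:

  # Create a VMkernel interface on the vSphereProvisioning TCP/IP stack
  esxcli network ip interface add -i vmk3 -p Provisioning -N vSphereProvisioning
  # Give it a static address in the provisioning VLAN
  esxcli network ip interface ipv4 set -i vmk3 -t static -I 192.168.13.10 -N 255.255.255.0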

While validating the design we ran into an issue with Cross vCenter vMotion. If the VM was powered on, the x-vCenter vMotion completed successfully, but when the VM was powered off, the x-vCenter vMotion failed with the error ‘Cannot connect to host’.

web_client_error

Because the failure only happened when the VM was powered off, we immediately suspected the provisioning VMkernel interface and TCP/IP stack. We double-checked the configuration and verified that the VMkernel interfaces could reach each other.

esxa01_vmkernel_config esxb01_vmkernel_config

esxa01_firewall esxa01_firewall_out

esxa01_ping_esxb01 esxb01_ping_esxa01
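
A reachability test like the one in the screenshots can be done with vmkping, forcing both the provisioning netstack and the interface; vmk3 and the destination address are examples:

  # Ping the provisioning VMkernel interface of the other host over the vSphereProvisioning stack
  vmkping -S vSphereProvisioning -I vmk3 192.168.13.11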

After this we examined the vpxa log on the source host and found connection errors between the provisioning VMkernel interfaces on the source and destination hosts.

esxa01_vpxa_log_error esxa01_vpxa_log_error_2

Because pinging between the VMkernel interfaces worked, we wanted to verify whether the provisioning network packets actually reached the VMkernel interface. To do this we used the pktcap-uw tool, which is included by default in ESXi 5.5 and later. pktcap-uw is an enhanced packet capture and analysis tool that can be used in place of the legacy tcpdump-uw tool.

With pktcap-uw we generated receive and transmit traffic captures on the provisioning VMkernel interfaces of both the source and destination hosts. The capture files were analyzed with Wireshark to verify whether the provisioning network packets were exchanged between the hosts.
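
The captures were taken along these lines; the interface name and output paths are examples, and the filter on TCP port 902 limits the capture to the NFC/provisioning traffic:

  # Capture frames received (--dir 0) and sent (--dir 1) on the provisioning VMkernel interface
  pktcap-uw --vmk vmk3 --dir 0 --tcpport 902 -o /tmp/prov_rx.pcap
  pktcap-uw --vmk vmk3 --dir 1 --tcpport 902 -o /tmp/prov_tx.pcap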

esxb01_pcap_receive

The picture above is taken from the receive packet capture on the destination host. As you can see, packets are received from IP address 192.168.13.10 on port 902. This is the IP address of the provisioning VMkernel interface on the source host, and port 902 is the port used for the provisioning (NFC) traffic.

Because traffic was flowing between the VMkernel interfaces, we next checked whether the NFC service was listening for connections on the provisioning VMkernel interfaces.
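
One way to perform such a check from an SSH session on the other host is nc, which ships with ESXi. This is a sketch rather than the exact commands from the screenshots, and the destination address is an example:

  # Test whether anything accepts connections on the NFC port of the remote provisioning interface
  nc -z 192.168.13.11 902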

esxa01_ssh_esxb01_prov esxb01_ssh_esxa01_prov

The NFC service was not listening on the provisioning VMkernel interface of either host. To verify whether the NFC service was listening at all, we performed the same test against the management VMkernel interfaces.

esxa01_ssh_esxb01_mgmt esxb01_ssh_esxa01_mgmt

This time the test was successful. It looks like the NFC service only accepts incoming connections on the management VMkernel interfaces. To investigate this further we opened an SR with VMware.

The SR was transferred to VMware engineering and we had to wait a very long time.

The fix VMware engineering provided was to increase the maximum amount of memory that the NFC process (nfcd) is allowed to use on the vSphere hosts. The commands were:

  # Look up the scheduler group ID of the nfcd resource group
  grpID=$(vsish -e set /sched/groupPathNameToID host vim vmvisor nfcd | cut -d' ' -f 1)
  # Raise the maximum memory allocation (in MB) for that group
  vsish -e set /sched/groups/$grpID/memAllocationInMB max=16

And to check the result:

  vsish -e get /sched/groups/$grpID/memAllocationInMB

esxi_commands

After configuring the hosts with these settings, I performed a new x-vCenter vMotion and this time it was successful. We performed a few more migrations between different hosts and all of them were successful.

vmotion_success.PNG

Many thanks to the VMware support representative for keeping the SR open and giving us updates on the progress!