vSphere 6.5 Storage – What’s new

At VMworld Europe 2016, VMware announced the long-awaited vSphere 6.5. This blog post focuses on the new and enhanced storage features in vSphere 6.5.

VMFS-6

A new version of the VMFS file system is introduced, providing all-round performance improvements including faster file creation, device discovery and device rescanning. Perhaps the biggest change is that VMFS-6 is 4K aligned, which lays the groundwork for 4K-native drives once they become supported.

Because of the amount of on-disk changes there is no in-place upgrade path to VMFS-6. Moving to VMFS-6 should therefore be treated as a migration using Storage vMotion.
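
As a rough sketch, such a migration can be scripted with PowerCLI. The vCenter, host, datastore and device names below are placeholders, and the -FileSystemVersion parameter of New-Datastore is assumed to be available in your PowerCLI version:

    # Minimal sketch: create a new VMFS-6 datastore and evacuate an old
    # VMFS-5 datastore onto it with Storage vMotion.
    Connect-VIServer -Server 'vcenter.lab.local'

    $vmhost = Get-VMHost -Name 'esxi01.lab.local'

    # -Path takes the canonical name of the backing LUN (example value).
    $new = New-Datastore -VMHost $vmhost -Name 'DS01-VMFS6' -Vmfs `
        -FileSystemVersion 6 -Path 'naa.600508b1001c16aa0000000000000001'

    # Storage vMotion every VM off the old VMFS-5 datastore.
    Get-VM -Datastore (Get-Datastore -Name 'DS01') | Move-VM -Datastore $new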

Limits Increase

There are two major limit increases in vSphere 6.5. First, ESXi 6.5 hosts can now support up to 2,000 paths in total, double the previous limit of 1,024. Second, ESXi 6.5 hosts can now support up to 512 devices, a 2x increase from previous versions of ESXi, where the number of supported devices was limited to 256.
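
A quick way to see how close existing hosts are to these maximums is to count devices and paths with PowerCLI; a small sketch (an active vCenter connection is assumed):

    # Report the device and path count per host against the new
    # 512-device / 2,000-path maximums.
    foreach ($vmhost in Get-VMHost) {
        $luns  = @(Get-ScsiLun -VMHost $vmhost -LunType disk)
        $paths = @($luns | Get-ScsiLunPath)
        '{0}: {1} devices, {2} paths' -f $vmhost.Name, $luns.Count, $paths.Count
    }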

NFS 4.1 Improvements

The major improvement for NFS 4.1 is support for hardware acceleration, which allows certain operations to be offloaded to the array. The other improvements are listed below, followed by a mount sketch:

  • NFS 4.1 is now fully supported with IPv6
  • NFS 4.1 Kerberos AES encryption support (AES256-CTS-HMAC-SHA1-96 and AES128-CTS-HMAC-SHA1-96)
  • NFS 4.1 Kerberos integrity checking support (SEC_KRB5I)
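
As a sketch, mounting an NFS 4.1 datastore with Kerberos might look as follows in PowerCLI. The server, export and cluster names are examples, each host is assumed to already have its Active Directory/Kerberos credentials configured, and the -FileSystemVersion and -Kerberos parameters of New-Datastore are assumed to be available in your PowerCLI version:

    # Mount an NFS 4.1 datastore with Kerberos on every host in a cluster.
    Get-Cluster -Name 'Prod-01' | Get-VMHost | ForEach-Object {
        New-Datastore -VMHost $_ -Name 'NFS41-DS01' -Nfs `
            -FileSystemVersion '4.1' -Kerberos `
            -NfsHost 'nas01.lab.local' -Path '/export/vmware/ds01'
    }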

iSCSI Improvements

iSCSI routed connections

The first improvement is that iSCSI routed connections are now supported. It is also now possible to use different gateway settings per VMkernel interface, which means port binding can be used to reach targets in different subnets.
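
Setting a per-interface gateway is done through esxcli (the --gateway option of "esxcli network ip interface ipv4 set" is new in ESXi 6.5). A sketch via Get-EsxCli; the host, interface name and addresses are examples:

    # Give the iSCSI VMkernel interface its own gateway so port binding
    # can reach targets in another subnet.
    $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name 'esxi01.lab.local') -V2

    $esxcli.network.ip.interface.ipv4.set.Invoke(@{
        interfacename = 'vmk2'
        type          = 'static'
        ipv4          = '10.10.20.11'
        netmask       = '255.255.255.0'
        gateway       = '10.10.20.1'
    })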

UEFI boot

It is also now possible to use UEFI iSCSI boot: an ESXi host can boot from an iSCSI LUN configured through the iSCSI boot settings in the host's UEFI firmware.

SIOC v2

Storage I/O Control is now policy driven via I/O Filters. This allows you to extend Storage Policies with SIOC settings such as Limits, Reservations and Shares. By default there are three preconfigured options for these settings, called Low, Normal and High, and it is possible to customize the values to your liking.

SIOC Storage policy integration

In the initial release of SIOC v2 there is no support for VSAN or VVols; SIOC v2 is only supported with virtual machines that run on VMFS- or NFS-backed datastores.
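
As a sketch, a custom SIOC policy could be built with the SPBM cmdlets. The capability names below ('spm.limit' and so on) are assumptions; inspect Get-SpbmCapability on your own system for the names the SIOC I/O filter actually exposes:

    # Discover the SIOC I/O filter capabilities, then build a policy.
    Get-SpbmCapability | Where-Object { $_.Name -like 'spm*' } |
        Format-Table Name, ValueType

    # Assuming capabilities named 'spm.limit', 'spm.reservation' and
    # 'spm.shares' exist, with illustrative values:
    $ruleSet = New-SpbmRuleSet -AllOfRules @(
        New-SpbmRule -Capability (Get-SpbmCapability -Name 'spm.limit')       -Value 1000
        New-SpbmRule -Capability (Get-SpbmCapability -Name 'spm.reservation') -Value 100
        New-SpbmRule -Capability (Get-SpbmCapability -Name 'spm.shares')      -Value 2000
    )
    New-SpbmStoragePolicy -Name 'SIOC-Custom' -AnyOfRuleSets $ruleSet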

VSAN 6.5

VSAN 6.5 is included in vSphere 6.5 and brings a few new features as well as a changed licensing setup.

iSCSI service

The VSAN iSCSI service allows you to create iSCSI targets and LUNs on top of the VSAN datastore. These LUNs are VSAN objects and have a Storage Policy assigned to them. The feature is aimed at physical workloads such as Microsoft clustering with shared disks; it is not intended for connecting other vSphere clusters. A maximum of 1,024 LUNs and 128 iSCSI targets per cluster is supported, and the LUN capacity limit is 62 TB.
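
Later PowerCLI releases expose cmdlets for this service; the cmdlet and parameter names below follow those releases and should be treated as assumptions, as are the cluster and policy names:

    # Publish a vSAN-backed LUN for a physical workload, e.g. a cluster
    # quorum disk.
    $cluster = Get-Cluster -Name 'VSAN-01'
    $policy  = Get-SpbmStoragePolicy -Name 'vSAN Default Storage Policy'

    $target = New-VsanIscsiTarget -Cluster $cluster -Alias 'mscs-quorum' `
        -StoragePolicy $policy
    New-VsanIscsiLun -Target $target -Name 'quorum-disk' -CapacityGB 10 `
        -StoragePolicy $policy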

2-Node Direct Connect

VSAN 2-Node Direct Connect allows you to create a VSAN ROBO configuration without a switch by simply connecting the two hosts with cross-connect cables. This can make a huge difference in total cost of ownership, because it is no longer necessary to purchase 10 Gbit switches to connect the hosts.

VSAN 2-Node Direct Connect

Furthermore, in this type of configuration it is possible to tag a VMkernel interface for witness traffic, so that this traffic can be separated from the regular VSAN data traffic.
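
The tagging itself is done with "esxcli vsan network ip add -T witness"; a sketch through Get-EsxCli, with host and interface names as examples:

    # Tag vmk1 for witness traffic on both direct-connected hosts.
    foreach ($name in 'esxi01.lab.local', 'esxi02.lab.local') {
        $esxcli = Get-EsxCli -VMHost (Get-VMHost -Name $name) -V2
        $esxcli.vsan.network.ip.add.Invoke(@{
            interfacename = 'vmk1'
            traffictype   = 'witness'
        })
    }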

Licenses

The VSAN licensing has changed: an All-Flash configuration is now possible with the VSAN Standard license, which means all VSAN licenses now support All-Flash configurations. If you want to use data services such as deduplication, compression or erasure coding, you still need the VSAN Advanced license. For a quick overview of the licensing options, visit the VMware website at http://www.vmware.com/products/virtual-san.html

Hardware support

VSAN 6.5 also introduces support for 512e drives, which will enable larger capacities.

VVOLs 2.0

Array-based Replication

VVols 2.0 adds support for array-based replication. Unlike traditional array-based replication such as NetApp MetroCluster, which replicates the entire datastore, VVol replication gives you fine-grained control over virtual machine replication. This means you have the flexibility to replicate an individual virtual machine or a group of virtual machines rather than all of them.

VVol Array-based Replication

DR API

vSphere 6.5 also offers public APIs for triggering DR operations, as well as PowerCLI cmdlets for administrator-level orchestration and automation. The main operations are listed below, followed by a cmdlet sketch:

  • Replication Discovery – VVol disaster recovery discovers the current replication relationships between two fault domains.
  • Sync Replication Group – Synchronizes the data between source and replica.
  • Test Failover – To ensure that the recovered workloads will be functional after a failover, administrators periodically run the test-failover workflow. After a test, administrators can optionally move the devices from test to production when ready for a real failover.
  • Disaster Recovery and Planned Migration – For planned migration, on-demand sync can be initiated at the recovery site.
  • Setting up protection after a DR event – After the recovery on a peer site, administrators can set protection in the reverse direction.
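
A sketch of how these operations map to the SPBM replication cmdlets in PowerCLI 6.5; the 'Gold' policy name is an example, and the exact parameter sets may differ per VASA provider:

    # Find the replication groups behind the 'Gold' policy.
    $source = Get-SpbmReplicationGroup -StoragePolicy (Get-SpbmStoragePolicy -Name 'Gold')
    $target = (Get-SpbmReplicationPair -Source $source).Target

    # Sync Replication Group: push the latest data to the replica.
    Sync-SpbmReplicationGroup -ReplicationGroup $source

    # Test Failover at the recovery site, then clean the test up again.
    Start-SpbmReplicationTestFailover -ReplicationGroup $target
    Stop-SpbmReplicationTestFailover -ReplicationGroup $target

    # Real failover, then set up protection in the reverse direction.
    Start-SpbmReplicationFailover -ReplicationGroup $target
    Start-SpbmReplicationReverse -ReplicationGroup $target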

Oracle RAC

VVols is now also validated to support Oracle RAC workloads.

VASA 3.0

VASA 3.0 introduces a new concept called a 'line of service': a group of related capabilities with a specific purpose, such as inspection, compression, encryption, replication, caching or persistence. In addition to configuring replication at the individual Storage Policy level, it is now possible to create a line of service for replication and assign it to multiple Storage Policies.

As an example, imagine you have three Storage Policies: Gold, Silver and Bronze. Although these three tiers have very different storage capabilities assigned, replication can be managed once through a replication line of service instead of being configured on each individual Storage Policy.
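
On the vSphere side you can verify that each policy picked up the replication line of service by listing its replication groups; a small sketch using the example tiers above:

    # Each policy that references the replication line of service should
    # expose one or more replication groups.
    foreach ($tier in 'Gold', 'Silver', 'Bronze') {
        $groups = @(Get-SpbmReplicationGroup -StoragePolicy (Get-SpbmStoragePolicy -Name $tier))
        '{0}: {1} replication group(s)' -f $tier, $groups.Count
    }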