vSphere 6.5 Storage – What’s new

At VMworld Europe 2016, VMware announced the long-awaited vSphere 6.5. This blog post focuses on the new and enhanced storage features in vSphere 6.5.

VMFS-6

A new version of the VMFS file system is introduced, providing all-round performance improvements including faster file creation, device discovery and device rescanning. Perhaps the biggest change is that VMFS-6 is 4K aligned, which paves the way for 4K native drives once they are supported.

Because of the amount of on-disk changes, there is no in-place upgrade path from VMFS-5 to VMFS-6. Moving to VMFS-6 should be treated as a migration: create a new VMFS-6 datastore and use Storage vMotion to move the virtual machines over.
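
As a rough illustration (the datastore names below are placeholders), the file system version of the old and new datastores can be checked from the ESXi shell with vmkfstools before and after the migration:

  # Show the file system type and version of a datastore.
  vmkfstools -Ph /vmfs/volumes/Datastore-VMFS5
  vmkfstools -Ph /vmfs/volumes/Datastore-VMFS6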

Limits Increase

There are two major limit increases in vSphere 6.5. First, ESXi 6.5 hosts now support up to 2,000 storage paths in total. Second, ESXi 6.5 hosts now support up to 512 devices, a 2x increase over previous versions of ESXi, which were limited to 256 devices.
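
To see how close a host is to these limits, the devices and paths can be counted from the ESXi shell. The grep patterns below are assumptions based on the default esxcli output layout (device identifiers at the start of each entry, one "Runtime Name:" field per path):

  # Count the storage devices seen by this host (naa.*, t10.*, eui.* or mpx.*).
  esxcli storage core device list | grep -cE "^(naa|t10|eui|mpx)\."
  # Count the storage paths.
  esxcli storage core path list | grep -c "Runtime Name:"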

NFS 4.1 Improvements

The major improvement for NFS 4.1 is the support for hardware acceleration. This allows certain operations to be offloaded to the array. Other improvements are:

  • NFS 4.1 will now be fully supported with IPv6
  • NFS 4.1 Kerberos AES encryption support (AES256-CTS-HMAC-SHA1-96 and AES128-CTS-HMAC-SHA1-96)
  • NFS 4.1 Kerberos integrity checking support (SEC-KRB5I)
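
As a sketch, and assuming the host has already been joined to Active Directory and has its NFS Kerberos credentials configured, an NFS 4.1 datastore with Kerberos integrity checking could be mounted from the ESXi shell like this (server, export and datastore names are placeholders):

  # Mount an NFS 4.1 export with Kerberos integrity checking (SEC_KRB5I).
  esxcli storage nfs41 add -H nfs01.example.com -s /export/ds01 -v NFS41-DS01 -a SEC_KRB5I
  # List the NFS 4.1 mounts to verify.
  esxcli storage nfs41 list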

iSCSI Improvements

iSCSI routed connections

The first improvement is that routed iSCSI connections are now supported. It is also now possible to configure a different gateway per VMkernel interface, which means port binding can be used to reach targets in different subnets.
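
A minimal sketch of the per-VMkernel gateway setting, assuming the per-interface --gateway option that accompanies this feature in 6.5 (the interface name and addresses are placeholders):

  # Give the iSCSI VMkernel interface vmk2 its own gateway so that targets
  # in a different subnet can be reached while using port binding.
  esxcli network ip interface ipv4 set -i vmk2 -I 192.168.20.11 -N 255.255.255.0 -g 192.168.20.1 -t static
  # Check the result.
  esxcli network ip interface ipv4 get -i vmk2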

UEFI boot

It is also now possible to use UEFI iSCSI boot. With this you can boot an ESXi host from an iSCSI LUN using the UEFI settings in the host firmware.

SIOC v2

Storage I/O Control is now policy driven via I/O Filters. This allows you to extend Storage Policies with SIOC settings such as limits, reservations and shares. By default there are three predefined configurations for these settings: Low, Normal and High. It is also possible to customize the settings to your liking.

SIOC Storage policy integration

In the initial release of SIOC v2 there will be no support for VSAN or VVOLs. SIOC v2 is only supported with virtual machines that run on VMFS or NFS backed datastores.

VSAN 6.5

VSAN 6.5 is included in vSphere 6.5 and adds a few new features and a different licensing setup.

iSCSI service

The VSAN iSCSI service allows you to create iSCSI targets and LUNs on top of the VSAN datastore. These LUNs are VSAN objects and have a Storage Policy assigned to them. This feature is targeted at physical workloads such as Microsoft failover clustering with shared disks; it is not intended for presenting storage to other vSphere clusters. A maximum of 1,024 LUNs and 128 iSCSI targets can be created per cluster, and the maximum LUN size is 62 TB.
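
The service is normally configured from the Web Client, but there is also an esxcli namespace for it. The sketch below is an assumption about that namespace; verify the exact subcommands and flags with the esxcli vsan iscsi help on a 6.5 host before use (the alias, IQN and LUN size are made-up examples):

  # Enable the VSAN iSCSI target service.
  esxcli vsan iscsi status set --enabled true
  # Create a target and a 100 GB LUN behind it.
  esxcli vsan iscsi target add --alias target01 --iqn iqn.2016-10.com.example:target01
  esxcli vsan iscsi target lun add --target-alias target01 --lun-id 0 --size 100G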

2-Node Direct Connect

VSAN 2-Node Direct Connect allows you to create a VSAN ROBO configuration without a switch by simply connecting the two hosts with crossover cables. This can make a huge difference in total cost of ownership because it is no longer necessary to purchase 10 Gbit switches to connect the hosts.

VSAN 2-Node Direct Connect

Furthermore, in this type of configuration it is possible to tag a VMkernel interface for witness traffic, so that witness traffic can be separated from the VSAN data traffic.
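
The tagging can be done per host from the ESXi shell; the command below follows the witness traffic separation syntax documented for 2-node configurations (vmk1 is just an example interface):

  # Tag vmk1 to carry VSAN witness traffic towards the witness appliance.
  esxcli vsan network ip add -i vmk1 -T witness
  # List the VSAN network configuration to confirm the traffic type.
  esxcli vsan network list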

Licenses

The VSAN licensing has changed: an All-Flash configuration is now possible with the VSAN Standard license, which means all VSAN editions now support All-Flash. If you want to use data services such as deduplication, compression or erasure coding you still need the VSAN Advanced license. For a quick overview of the different licensing options visit the VMware website at http://www.vmware.com/products/virtual-san.html

Hardware support

VSAN 6.5 also introduces support for 512e drives, which will enable larger capacities.
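
A 512e drive reports a 512-byte logical sector size on top of a 4K physical sector size. Whether a device is presented as 512n or 512e can be checked from the ESXi shell, assuming the per-device capacity listing that exposes block sizes:

  # Show logical and physical block sizes per device; a 512e drive reports
  # a 512-byte logical and a 4096-byte physical block size.
  esxcli storage core device capacity list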

VVOLs 2.0

Array-based Replication

VVols 2.0 adds support for array-based replication. Unlike traditional array-based replication such as NetApp MetroCluster, which replicates the entire datastore, VVol replication gives you fine-grained control over virtual machine replication: instead of replicating everything, you can replicate a group of virtual machines or even an individual virtual machine.

VVol Array-based Replication

DR API

vSphere 6.5 also offers public APIs for triggering DR operations, as well as PowerCLI cmdlets for administrator-level orchestration and automation:

  • Replication Discovery – VVol disaster recovery discovers the current replication relationships between two fault domains.
  • Sync Replication Group – Synchronizes the data between source and replica.
  • Test Failover – To ensure that the recovered workloads will be functional after a failover, administrators periodically run the test-failover workflow. After a test, administrators can optionally move the devices from test to production when ready for a real failover.
  • Disaster Recovery and Planned Migration – For planned migration, on-demand sync can be initiated at the recovery site.
  • Setting up protection after a DR event – After the recovery on a peer site, administrators can set protection in the reverse direction.

Oracle RAC

VVols is also now validated to support Oracle RAC workloads.

VASA 3.0

VASA 3.0 introduces a new concept called ‘line of service’. A line of service is a group of related capabilities with a specific purpose, such as inspection, compression, encryption, replication, caching, or persistence. In addition to configuring replication in each individual Storage Policy, it is now possible to create a line of service for replication and assign it to multiple Storage Policies.

As an example, imagine you have three Storage Policies: Gold, Silver and Bronze. While these three categories have very different storage capabilities assigned, replication can be managed once through a replication line of service instead of being configured on each individual Storage Policy.

VSAN 6.2 Upgrade Error

Update 30 March – VMware has released a KB article with steps to solve this problem. VMware will provide a permanent fix in due course; in the meantime they are providing a script that detects stranded objects, broken disk chains and objects with a CBT lock. Where possible, the script will ask whether you want the issue fixed for you, realign the objects, and bring everything to interim on-disk version 2.5. This then allows the upgrade of the on-disk format to proceed to v3. All steps (as well as the script) can be found in the KB article listed below.

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2144881

For a detailed description of the steps in the KB article, see the blog post Cormac Hogan has written:

http://cormachogan.com/2016/03/31/vsan-6-2-upgrade-failed-realign-objects/

With the announcement of the GA of vSphere 6 Update 2 it was time to upgrade our lab environment to vSphere 6 Update 2 and VSAN 6.2. Upgrading the PSC, VCSA and vSphere hosts went smoothly, with no problems. What made me smile was that the VCSA upgrade GUI issue, where the installer appeared stuck at 70%, was fixed as well! Not only did VMware add new features in Update 2, a lot of bug fixes were applied too.

But now for the serious part: upgrading to VSAN 6.2. Once your vCenter and vSphere hosts are updated to Update 2, it is possible to upgrade your VSAN datastore. This should be as easy as clicking one button. At least, it should be... not for our lab environment 😦

The lab environment was built when vSphere 6 GA became available. Since then it has been upgraded multiple times, used heavily, and on occasion CPR had to be performed to get it running again.

All this caused the automatic upgrade to VSAN 6.2 to fail with the error: “Failed to realign following VSAN objects”.

vsan_62_upgrade_failure

Luckily, after playing with the lab environment I know my way around checking the current status of VSAN. The health check plugin in the Web Client didn’t show any errors, so I used RVC (the Ruby vSphere Console) on the vCenter Server Appliance to dig a little deeper.

I ran the ‘vsan.check_state’ command to verify the status of the VSAN cluster. As shown in the picture below, no immediate problems were found.

vsan_check_state

After this I decided to check the status of one of the objects listed in the error message with the ‘vsan.object_info’ command. The object in question turned out to be a VMDK of the PSC. I randomly checked some other objects from the error and almost all of them were VMDKs of VMs on the VSAN datastore.

vsan_object_info
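
For reference, this is roughly what the RVC session looked like; the datacenter name, cluster name and object UUID below are placeholders:

  # In RVC on the VCSA, navigate to the VSAN cluster and check its state.
  cd /localhost/LabDC/computers/LabCluster
  vsan.check_state .
  # Inspect one of the objects from the upgrade error by its UUID.
  vsan.object_info . 5d8b2a58-aaaa-bbbb-cccc-000000000000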

I knew it was possible to remove these objects from the VSAN datastore, but since they were VMDKs, and not for instance folders or VM swap files, this was not possible without breaking the VMs. If you remove an object, its data is gone!

The objects that represented a VM swap file could be removed. I powered off the VM and logged on with SSH to a vSphere host in the VSAN cluster. On the host I used the ‘objtool’ command to remove the object.

esx_shell_remove_object.PNG
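
Roughly, the removal looked like the sketch below; the UUID is a placeholder and the flags follow the objtool usage shown in VMware's VSAN KB articles, so double-check them on your build before deleting anything:

  # On the ESXi host, delete the orphaned VM swap object by its UUID.
  # Never do this for objects that back a VMDK - the data is gone for good.
  /usr/lib/vmware/osfs/bin/objtool delete -u 5d8b2a58-aaaa-bbbb-cccc-000000000000 -f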

But after removing those objects I still had the problem that the remaining objects were VMDKs. Since we do not have any storage other than VSAN in our lab environment, it was not possible to Storage vMotion all these VMs to different datastores. We use the default VSAN storage policy with an FTT setting of 1; with this configuration it is possible to survive one failure, e.g. a disk group or a host. With this in mind I decided to perform the upgrade to VSAN 6.2 manually.

When you create a new disk group it is automatically created with the VSAN 6.2 on-disk format. Since I did not have any new disks, I decided to remove the disk groups one by one and recreate them with the disks from the removed disk groups. This is essentially what the automatic upgrade does anyway!

When removing a disk group I chose the option to do a full data migration of the data on that disk group to another disk group. It should also be possible to skip this and let VSAN handle it as a disk group failure: the data will be resynchronised after the disk group has been removed, but this results in reduced availability of the data for a period of time.
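
For completeness, a rough ESXi-shell equivalent of the remove/recreate cycle is shown below. The device identifiers are placeholders, and the full data migration option itself is part of the Web Client workflow rather than these commands:

  # Remove the disk group identified by its cache-tier SSD (data on it will
  # be resynchronised elsewhere once it is gone).
  esxcli vsan storage remove -s naa.cache_ssd_device
  # Recreate the disk group; it is formatted with the current on-disk version.
  esxcli vsan storage add -s naa.cache_ssd_device -d naa.capacity_device_1 -d naa.capacity_device_2
  # Check the on-disk format version of the claimed disks afterwards.
  esxcli vsan storage list | grep "Format Version"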

After a while I managed to upgrade all the disk groups to the new on-disk format and was able to use the new VSAN 6.2 features. As it is a lab setup we only have a hybrid VSAN configuration, so not all new features are supported (deduplication and erasure coding require All-Flash).