
Warning! This article is written for laboratory and demo purposes only; this setup is not supported in production.

Happy New Year, people!

In Windows Server 2016, Microsoft introduced a new type of storage called Storage Spaces Direct (aka S2D). S2D enables building highly available storage systems from locally attached disks, without a separate SAS fabric such as shared JBODs or enclosures. This is the first true software-defined storage (SDS) from Microsoft. Software-defined storage is a concept in which storage is provisioned and managed in software, independent of dedicated storage hardware.

In Windows Server 2019, Microsoft added a lot of improvements to Storage Spaces Direct. One of these improvements is a new resiliency type known as nested resiliency, which is designed for clusters of only two servers and targeted at branch offices and small deployments.

With nested resiliency, we can withstand multiple failures at the same time (we can lose one server and one drive, or two drives), compared to a classic two-way mirror (one server or one drive). There are two options for nested resiliency:

  • Nested two-way mirror: This option is like a four-way mirror, with two copies on each server. Capacity efficiency is ~25%, compared to 50% for a classic two-way mirror (see the quick arithmetic sketch after this list).
  • Nested mirror-accelerated parity: This combines a nested two-way mirror with nested parity. Capacity efficiency is roughly ~35% to 40%.
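
To make the efficiency numbers concrete, here is a quick arithmetic sketch in PowerShell (assuming the eight 1 TB capacity drives used later in this article):

# Capacity efficiency sketch: 2 nodes x 4 x 1 TB capacity drives = 8 TB raw
$RawTB = 2 * 4 * 1
# Classic two-way mirror keeps 2 copies: 8 TB / 2 = 4 TB usable (50%)
$TwoWayUsableTB = $RawTB / 2
# Nested two-way mirror keeps 4 copies: 8 TB / 4 = 2 TB usable (25%)
$NestedUsableTB = $RawTB / 4
"Two-way mirror: {0} TB usable; nested two-way mirror: {1} TB usable" -f $TwoWayUsableTB, $NestedUsableTB

Real-world usable capacity comes in a bit lower (about 1.8 TB here, as we will see later) because of drive formatting overhead and the pool reserve.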

In this article, we’ll walk through how to enable the nested two-way mirror resiliency mode in a hyper-converged Storage Spaces Direct deployment on Windows Server 2019, then test the performance and compare it with a previous classic two-way mirror deployment on Windows Server 2016.

The hardware is 2 x HPE ProLiant MicroServer Gen8 systems; each system has the following specifications:

  • 1 SATA SSD 2.5" 512 GB – model: SanDisk SD7SB2Q-512G-1006 (OS disk)
  • 2 SATA SSD 2.5" 1 TB – model: Samsung SSD 840 EVO 1TB (S2D capacity storage)
  • 2 SATA SSD 2.5" 1 TB – model: Samsung SSD 860 EVO 1TB (S2D capacity storage)
  • 2 NIC 1 GbE – model: Broadcom NetXtreme Gigabit Ethernet
  • 1 NIC 1 GbE – model: Intel 82574L Gigabit Network Connection
  • 2 x 8 GB DDR3 RAM 1600 MHz – 16 GB total memory
  • 1 Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50 GHz – 4 cores / 8 threads
  • 2 HPE switches – model: PS1810-8G, with jumbo frames and spanning tree enabled

The SSD drives in this build are consumer grade and are not intended for production use.

The demo software setup is as follows:

  • Domain controller, DNS and DHCP server
  • Host: Windows Server 2019 Datacenter Core edition, build 10.0.17763, with the December 2018 update
    • One storage pool
    • 2 x 512 GB nested two-way mirror volumes
    • CSVFS_REFS file system
    • 10 virtual machines (5 virtual machines per node)
    • 2 virtual processors and 2 GB of RAM per virtual machine
    • VMs: Windows Server 2016 Datacenter Core edition with the November 2018 update
    • Jumbo frames enabled on all network adapters

The following steps describe the network and host pre-configuration:

  • Install the Hyper-V and Failover Clustering roles, set the necessary Windows Firewall rules, enable Remote Desktop, and set the power plan to High Performance. You can use the following set of PowerShell commands to automate this step:
# S2D hyper-converged cluster Pre-Configuration 
$Nodes = "S2D-HV01", "S2D-HV02"

Invoke-Command -ComputerName $Nodes -ScriptBlock {

#Install Hyper-V and Failover Cluster
Install-WindowsFeature Hyper-V, Failover-Clustering, FS-FileServer -IncludeAllSubFeature -IncludeManagementTools -Verbose

#Set Windows Firewall
Set-NetFirewallRule -Group "@firewallapi.dll,-36751" -Profile Any -Enabled true # Remote Shutdown firewall rule
Set-NetFirewallRule -DisplayName 'Windows Remote Management (HTTP-In)' -Profile Any -Enabled True -Direction Inbound -Action Allow
Set-NetFirewallRule -DisplayName 'Windows Management Instrumentation (WMI-In)' -Profile Any -Enabled True -Direction Inbound -Action Allow
Set-NetFirewallRule -DisplayName 'Remote Volume Management - Virtual Disk Service (RPC)' -Profile Any -Enabled True -Direction Inbound -Action Allow
Set-NetFirewallRule -DisplayName 'Remote Volume Management - Virtual Disk Service Loader (RPC)' -Profile Any -Enabled True -Direction Inbound -Action Allow
Set-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv4-In)' -Enabled True -Direction Inbound -Action Allow -Profile Any
Set-NetFirewallRule -DisplayName 'File and Printer Sharing (Echo Request - ICMPv6-In)' -Enabled True -Direction Inbound -Action Allow -Profile Any
Set-NetFirewallRule -DisplayName 'File and Printer Sharing (SMB-In)' -Enabled True -Direction Inbound -Action Allow -Profile Any

# Enable Remote Desktop
(Get-WmiObject Win32_TerminalServiceSetting -Namespace root\cimv2\TerminalServices).SetAllowTsConnections(1,1) | Out-Null
(Get-WmiObject -Class "Win32_TSGeneralSetting" -Namespace root\cimv2\TerminalServices -Filter "TerminalName='RDP-tcp'").SetUserAuthenticationRequired(0) | Out-Null
Get-NetFirewallRule -DisplayName "Remote Desktop*" | Set-NetFirewallRule -enabled true

#Set the Windows Power plan to High Performance:
POWERCFG.EXE /S SCHEME_MIN

Restart-Computer -Force
}
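After the nodes come back up from the restart, a quick check like the following (a minimal sketch) confirms the roles were installed on both nodes:
# Verify the required roles on both nodes
Foreach ($Node in $Nodes) {
Get-WindowsFeature Hyper-V, Failover-Clustering, FS-FileServer -ComputerName $Node |
Select-Object @{l="Node";e={$Node}}, Name, InstallState
}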
  • Next, we set the memory dump type on the cluster nodes to “Active Memory Dump”. Starting with Windows Server 2016, Microsoft added this new option for capturing memory dumps in the event of a system failure. The recommended setting for failover cluster nodes is Active Memory Dump. You can set this in the Startup and Recovery dialog box under Advanced System Settings in the System Control Panel, or you can use the Set-ItemProperty cmdlet to set the CrashDumpEnabled value:
# Set Active Memory Dump on Server Core
Invoke-Command -ComputerName $Nodes -ScriptBlock {
# Configure Active memory dump
Set-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name CrashDumpEnabled -Value 1
New-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name FilterPages -Value 1
Get-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl
} 
  • At this point, we create a Switch Embedded Teaming (SET) team, a new type of network teaming introduced in Windows Server 2016. We create a SET-enabled virtual switch, attach a management vNIC to the host operating system, set its bandwidth weight, and assign it an IP address.
# Create S2D SET Team
Invoke-Command -ComputerName $Nodes -ScriptBlock {
    Param ($Nodes)
    $NICs = Get-NetAdapter
    New-VMSwitch -MinimumBandwidthMode Weight -NetAdapterName $NICs.Name[0],$NICs.Name[1],$NICs.Name[2] `
    -AllowManagementOS $true -EnableEmbeddedTeaming $true -Name "S2D_SET_vSwitch" -Notes "S2D_SET_vSwitch" -Verbose
    Set-VMSwitch S2D_SET_vSwitch -DefaultFlowMinimumBandwidthWeight 30
    Set-VMSwitchTeam -Name "S2D_SET_vSwitch" -LoadBalancingAlgorithm HyperVPort
    
    Rename-NetAdapter -Name "vEthernet (S2D_SET_vSwitch)" -NewName "vEthernet (MGT_HostOS)"
    Set-VMNetworkAdapterVlan -VMNetworkAdapterName "S2D_SET_vSwitch" -Access -VlanId 0 -ManagementOS -Confirm:$false
    Set-VMNetworkAdapter -Name "S2D_SET_vSwitch" -MinimumBandwidthWeight 5 -ManagementOS

    If ($env:COMPUTERNAME -eq "$($Nodes[0])") {
    New-NetIPAddress -InterfaceAlias "vEthernet (MGT_HostOS)" -IPAddress 172.16.20.121 -PrefixLength 24 -DefaultGateway 172.16.20.1 -Type Unicast | Out-Null }
    Else {
    New-NetIPAddress -InterfaceAlias "vEthernet (MGT_HostOS)" -IPAddress 172.16.20.122 -PrefixLength 24 -DefaultGateway 172.16.20.1 -Type Unicast | Out-Null
       }
    
    Set-DnsClientServerAddress -InterfaceAlias "vEthernet (MGT_HostOS)" -ServerAddresses 172.16.20.9
} -ArgumentList $Nodes
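Before moving on, you can confirm the SET team and the management vNIC on both nodes with something like this (a minimal verification sketch):
# Verify the SET team, host vNICs, and management IP address
Invoke-Command -ComputerName $Nodes -ScriptBlock {
Get-VMSwitchTeam -Name "S2D_SET_vSwitch" | Select-Object Name, TeamingMode, LoadBalancingAlgorithm
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, SwitchName
Get-NetIPAddress -InterfaceAlias "vEthernet (MGT_HostOS)" -AddressFamily IPv4 | Select-Object IPAddress
}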
  • At this point, we create the host vNICs attached to the SET virtual switch and set their IP addresses and bandwidth weights accordingly. We also enable virtual Receive Side Scaling (vRSS).
# Create Host vNICs
$switchName = "S2D_SET_vSwitch"
$LiveMigration = "172.20.42."
$Backup = "172.20.48."
$HVReplica = "172.20.46."
$Cluster =  "172.20.45."
$SMB_A = "172.20.50."
$SMB_B = "172.20.51."
$IP = 16

foreach ($Node in $Nodes) {
$MGT_LiveMigration = $LiveMigration + $IP
$MGT_Backup = $Backup + $IP
$MGT_HVReplica = $HVReplica + $IP
$MGT_Cluster = $Cluster + $IP
$MGT_SMB_A = $SMB_A + $IP
$MGT_SMB_B = $SMB_B + $IP
Invoke-Command -ComputerName $Node -ScriptBlock {
Param ($MGT_LiveMigration, $MGT_Backup, $MGT_HVReplica, $MGT_Cluster, $MGT_SMB_A, $MGT_SMB_B, $switchName)

Add-VMNetworkAdapter -SwitchName $switchName -ManagementOS -Name MGT_LiveMigration
New-NetIPAddress -InterfaceAlias "vEthernet (MGT_LiveMigration)" -IPAddress $MGT_LiveMigration -PrefixLength 24 -Type Unicast | Out-Null
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGT_LiveMigration" -Access -VlanId 42 -Confirm:$false
Set-VMNetworkAdapter -ManagementOS -Name "MGT_LiveMigration" -MinimumBandwidthWeight 20
Set-DNSClient -InterfaceAlias *Live* -RegisterThisConnectionsAddress $False

Add-VMNetworkAdapter -SwitchName $switchName -ManagementOS -Name MGT_Backup
New-NetIPAddress -InterfaceAlias "vEthernet (MGT_Backup)" -IPAddress $MGT_Backup -PrefixLength 24 -Type Unicast | Out-Null
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGT_Backup" -Access -VlanId 48 -Confirm:$false
Set-VMNetworkAdapter -ManagementOS -Name "MGT_Backup"  -MinimumBandwidthWeight 10
Set-DNSClient -InterfaceAlias *Backup -RegisterThisConnectionsAddress $False

Add-VMNetworkAdapter -SwitchName $switchName -ManagementOS -Name MGT_HVReplica
New-NetIPAddress -InterfaceAlias "vEthernet (MGT_HVReplica)" -IPAddress $MGT_HVReplica -PrefixLength 24 -Type Unicast | Out-Null
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGT_HVReplica" -Access -VlanId 46 -Confirm:$false
Set-VMNetworkAdapter -ManagementOS -Name "MGT_HVReplica" -MinimumBandwidthWeight 10
Set-DNSClient -InterfaceAlias *Replica -RegisterThisConnectionsAddress $False

Add-VMNetworkAdapter -SwitchName $switchName -ManagementOS -Name MGT_Cluster
New-NetIPAddress -InterfaceAlias "vEthernet (MGT_Cluster)" -IPAddress $MGT_Cluster -PrefixLength 24 -Type Unicast | Out-Null
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGT_Cluster" -Access -VlanId 45 -Confirm:$false
Set-VMNetworkAdapter -ManagementOS -Name "MGT_Cluster" -MinimumBandwidthWeight 5
Set-DNSClient -InterfaceAlias *Cluster -RegisterThisConnectionsAddress $False

Add-VMNetworkAdapter -SwitchName $switchName -ManagementOS -Name MGT_SMB_A
New-NetIPAddress -InterfaceAlias "vEthernet (MGT_SMB_A)" -IPAddress $MGT_SMB_A -PrefixLength 24 -Type Unicast | Out-Null
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGT_SMB_A" -Access -VlanId 50 -Confirm:$false
Set-VMNetworkAdapter -ManagementOS -Name "MGT_SMB_A" -MinimumBandwidthWeight 10
Set-DNSClient -InterfaceAlias *SMB_A -RegisterThisConnectionsAddress $False

Add-VMNetworkAdapter -SwitchName $switchName -ManagementOS -Name MGT_SMB_B
New-NetIPAddress -InterfaceAlias "vEthernet (MGT_SMB_B)" -IPAddress $MGT_SMB_B -PrefixLength 24 -Type Unicast | Out-Null
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGT_SMB_B" -Access -VlanId 51 -Confirm:$false
Set-VMNetworkAdapter -ManagementOS -Name "MGT_SMB_B" -MinimumBandwidthWeight 10
Set-DNSClient -InterfaceAlias *SMB_B -RegisterThisConnectionsAddress $False

Enable-NetAdapterRss -Name *
  } -ArgumentList $MGT_LiveMigration, $MGT_Backup, $MGT_HVReplica, $MGT_Cluster, $MGT_SMB_A, $MGT_SMB_B, $switchName
$IP++
}
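Before continuing, it is worth double-checking the VLAN assignment and bandwidth weight of every host vNIC (a minimal sketch):
# Review VLAN IDs and bandwidth weights for all host vNICs
Invoke-Command -ComputerName $Nodes -ScriptBlock {
Get-VMNetworkAdapterVlan -ManagementOS | Format-Table ParentAdapter, OperationMode, AccessVlanId -AutoSize
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, BandwidthPercentage
}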
  • Next, we disable DNS registration for the Storage, Cluster, Backup, Replica, and Live Migration vNICs by running the following PowerShell commands. Note: DNS registration should only be enabled on the management host vNIC.
# Disable DNS registration for Storage, Cluster, Backup and Live Migration network adapters by running the following commandlets:
Invoke-Command -ComputerName $Nodes -ScriptBlock {
Get-DnsClient | Where-Object {$_.InterfaceAlias -notmatch "OS" } | Set-DNSClient -RegisterThisConnectionsAddress $false
}
  • In the final step, we enable jumbo frames on all vNICs by running the following PowerShell commands:
# Configure Jumbo Frame on each network adapter
Invoke-Command -ComputerName $Nodes -ScriptBlock {
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014 
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | FT -AutoSize
} 
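A standard do-not-fragment ping is a simple way to confirm jumbo frames work end to end, switch ports included; 8972 bytes of ICMP payload plus 28 bytes of headers corresponds to a 9000-byte IP MTU. A minimal sketch, assuming the second node's SMB_A address (172.20.50.17) from the loop above:
# Test jumbo frames end to end with a do-not-fragment ping
Invoke-Command -ComputerName $Nodes[0] -ScriptBlock {
ping.exe 172.20.50.17 -f -l 8972 -n 4
}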

In the following steps, we will create the S2D cluster, but before that we will validate the cluster configuration:

  • Open Windows PowerShell and run the following command:
Test-Cluster -Node $Nodes -Include Inventory, Network, "System Configuration", "Storage Spaces Direct" -Verbose

  • Once cluster validation succeeds, we proceed to create the cluster by running the following commands.
# New S2D Cluster
$Cluster = "NINJA-S2DCLU"
New-Cluster -Name $Cluster -Node $Nodes -NoStorage -StaticAddress 172.16.20.120/24 -Verbose
# Configure File Share Witness
Set-ClusterQuorum -Cluster $Cluster -FileShareWitness \\172.16.20.152\USBDisk1 -Credential $(Get-Credential)
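Once the cluster is formed, a quick sanity check (a minimal sketch) confirms both nodes are up and the witness is configured:
# Confirm cluster membership, networks, and quorum
Get-ClusterNode -Cluster $Cluster | Select-Object Name, State
Get-ClusterNetwork -Cluster $Cluster | Select-Object Name, Role, Address
Get-ClusterQuorum -Cluster $Cluster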

In the following steps, we will enable Storage Spaces Direct:

  • Open Windows PowerShell and run the following command to enable Storage Spaces Direct. In this example, we disable the cache because we use an all-flash system.
Enable-ClusterS2D -CimSession $Cluster -PoolFriendlyName "NINJA-S2D-HVPOOL" -Confirm:$false -CacheState Disabled -Verbose
Get-ClusterS2D -CimSession $Cluster
  • As mentioned earlier, this installation uses an all-flash system without a cache device; the remaining four drives in each node are used as the capacity tier. We can view the drive inventory in Windows Admin Center, or query it from PowerShell as sketched below.
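If you prefer PowerShell over Windows Admin Center, a minimal sketch that lists the drives claimed by the pool:
# List the capacity drives in the S2D pool
Get-StoragePool -CimSession $Cluster -FriendlyName "NINJA-S2D-HVPOOL" | Get-PhysicalDisk |
Sort-Object FriendlyName |
Select-Object FriendlyName, MediaType, Usage, HealthStatus, @{l="Size(GB)";e={[math]::Round($_.Size/1GB)}}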

[Screenshot: drive inventory in Windows Admin Center]

  • When you enable Storage Spaces Direct, S2D automatically creates up to two storage tiers, known as the performance and capacity tiers. The Enable-ClusterS2D cmdlet analyzes the devices and configures each tier with a combination of device type and resiliency (mirror or parity). In other words, the storage tier details and resiliency depend on the storage devices in the system and thus vary from system to system. Storage tier templates for nested resiliency are not created by default. First, we create new storage tier templates for mirror and parity using the New-StorageTier cmdlet, specifying a MediaType of HDD or SSD. In this example, we only use SSDs. At the time of this writing, PowerShell is the only way to create volumes with nested resiliency. To create the storage tier templates, use the following PowerShell commands:
# For mirror SSD
New-StorageTier -CimSession $Cluster -StoragePoolFriendlyName *S2D* -FriendlyName NestedMirror -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4

# For parity SSD
New-StorageTier -CimSession $Cluster -StoragePoolFriendlyName *S2D* -FriendlyName NestedParity -ResiliencySettingName Parity -MediaType SSD -NumberOfDataCopies 2 `
-PhysicalDiskRedundancy 1 -NumberOfGroups 1 -FaultDomainAwareness StorageScaleUnit -ColumnIsolation PhysicalDisk
  • You can verify that the nested resiliency tier templates were created successfully by using the Get-StorageTier cmdlet, as shown below.
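For example (a minimal sketch; the Nested* wildcard matches the two tier templates created above):
# Verify the nested resiliency tier templates
Get-StorageTier -CimSession $Cluster -FriendlyName Nested* |
Select-Object FriendlyName, ResiliencySettingName, MediaType, NumberOfDataCopies, PhysicalDiskRedundancy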

[Screenshot: Get-StorageTier output showing the NestedMirror and NestedParity tiers]

  • You can determine the supported volume size for each tier (NestedMirror and NestedParity) by using the following PowerShell commands.
# Get Supported storage size for NestedMirror tier
Get-StorageTierSupportedSize -FriendlyName NestedMirror -CimSession $Cluster | `
Select @{l="TierSizeMin(GB)";e={$_.TierSizeMin/1GB}}, @{l="TierSizeMax(TB)";e={$_.TierSizeMax/1TB}}, @{l="TierSizeDivisor(GB)";e={$_.TierSizeDivisor/1GB}}

# Get Supported storage size for NestedParity tier
Get-StorageTierSupportedSize -FriendlyName NestedParity -CimSession $Cluster | `
Select @{l="TierSizeMin(GB)";e={$_.TierSizeMin/1GB}}, @{l="TierSizeMax(TB)";e={$_.TierSizeMax/1TB}}, @{l="TierSizeDivisor(GB)";e={$_.TierSizeDivisor/1GB}}
  • As you can see from the following screenshot, in this example we have about 1.8 TB of capacity at the NestedMirror tier and 2.4 TB at the NestedParity tier. Comparing capacity efficiency: a classic two-way mirror yields 50% efficiency (~3.6 TB usable here), while a nested two-way mirror yields 25% (~1.8 TB). As a side note, the nested two-way mirror's 25% capacity efficiency is the lowest of any Storage Spaces Direct resiliency mode; nested mirror-accelerated parity achieves better capacity efficiency, about 35-40%, depending on two factors: the number of capacity drives in each server, and the mix of mirror and parity you specify when you create the volume.

[Screenshot: Get-StorageTierSupportedSize output for the NestedMirror and NestedParity tiers]

  • In this example, we create two nested two-way mirror volumes (one per node), which give us better performance than nested parity. To use a nested two-way mirror, reference the NestedMirror tier template with the New-Volume cmdlet as in the following example, and specify the volume size.
# Create Nested two-way mirror 512GB volumes
$Nodes = "NINJA-S2D-HV01", "NINJA-S2D-HV02"
Foreach ($Node in $Nodes) {
New-Volume -CimSession $Cluster -FriendlyName $Node -StoragePoolFriendlyName *S2D* -StorageTierFriendlyNames NestedMirror `
-StorageTierSizes 512GB -FileSystem CSVFS_ReFS -Verbose
}
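You can confirm the new volumes and their resiliency from PowerShell as well (a minimal sketch):
# Verify the nested two-way mirror volumes
Get-VirtualDisk -CimSession $Cluster |
Select-Object FriendlyName, ResiliencySettingName, NumberOfDataCopies, @{l="Size(GB)";e={$_.Size/1GB}}, HealthStatus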
  • Note that volumes using nested resiliency are displayed in Windows Admin Center with a clear label, as shown in the screenshot below. Microsoft is moving away from the traditional Failover Cluster Manager (FCM) console, so you won't see the nested resiliency setting in FCM. Once the volumes are created, you can manage and monitor them in Windows Admin Center just like any other Storage Spaces Direct volume.

[Screenshot: nested resiliency volumes in Windows Admin Center]

To benchmark the storage performance, we used the following tools:

  • DISKSPD version 2.0.21 workload generator
  • VMFleet workload orchestrator

The first test run delivered a total of ~160K IOPS – read latency @ 0.2 ms and write latency @ 3 ms.

Each virtual machine is configured as follows:

  • 4K IO size
  • 10 GB working set
  • 100% read and 0% write
  • No Storage QoS
  • No RDMA
  • No deduplication
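For reference, the per-VM workload above maps to a DISKSPD command line roughly like the following. This is a sketch: VMFleet generates the exact invocation, and the thread count, queue depth, and file path here are illustrative assumptions.
# 4K random IO, 10 GB working set, caching disabled, latency statistics captured
# -w0 = 100% read; for the second run below, use -w30 for 30% writes
diskspd.exe -b4K -d300 -t2 -o32 -w0 -r -Sh -L -c10G C:\run\testfile.dat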

[Screenshot: VMFleet results for the 100% read run]

The second test run delivered a total of ~12K IOPS – read latency @ 32 ms and write latency @ 73 ms.

Each virtual machine is configured as follows:

  • 4K IO size
  • 10 GB working set
  • 70% read and 30% write
  • No Storage QoS
  • No RDMA
  • No deduplication

[Screenshot: VMFleet results for the 70% read / 30% write run]

In this article, we showed how Storage Spaces Direct can be deployed with nested two-way mirror resiliency on two nodes using HPE ProLiant MicroServers. This setup is intended only for lab and test environments and is not supported in production. For more information on nested resiliency, see the Microsoft documentation here.

When using nested resiliency in Windows Server 2019, two factors must be considered. First, you will see lower write IOPS than with a classic two-way mirror. At 100% read, we got pretty much the same performance as the previous deployment on the same hardware with Windows Server 2016, because all reads are served locally within a node; but we observed much lower performance at 30% write, because nested resiliency has to write additional copies on each node. Second, you have less usable capacity available for production workloads, but in return nested resiliency gives you higher uptime and availability. Storage is cheap, but downtime is expensive!

I highly recommend the excellent article by my dear friend Darryl van der Peijl comparing the two-way mirror with nested resiliency.

__
Thanks for reading my blog.

If you have any questions or feedback, please leave a comment.

-Charbel Nemnom-
