I recently upgraded my Hyper-V host and network infrastructure to 10GbE.
As soon as I moved the Hyper-V virtual switch onto an LBFO team of 2x10GbE NICs, I got the following error:
If we look deeper into the event log, we can see that the cause is self-evident.
So what are Sum-of-Queues mode and Min-Queues mode?
Some time ago I posted a detailed article Enabling and Configuring VMQ / dVMQ in Windows Server 2012 R2 with a Network Adapter for Less Than 10 Gigs.
Make sure you read that article before continuing with the resolution.
As a quick summary: in Sum-of-Queues mode, the team's VMQ count is the sum of the queues of all physical network cards participating in the team, whereas in Min-Queues mode it is the minimum queue count among the team's physical network cards.
The question is, why don't we get the same error with 1 Gbps network adapters? Because with 1 Gbps adapters, VMQ is disabled by default: Microsoft sees no performance benefit from VMQ on a 1 Gbps NIC, since a single core can keep up with roughly 3.5 Gbps of traffic without any problem.
To enable VMQ on 1 GB network cards, see this article.
In this scenario, I use 2x10GbE adapters configured in switch-independent teaming mode with the dynamic load-distribution mode.
|Team mode ↓ \ Distribution mode →|Address Hash|Hyper-V Port|Dynamic|
|---|---|---|---|
|Switch independent|Min-Queues|Sum-of-Queues|Sum-of-Queues|

If you look at the table above, you can see that I am using Sum-of-Queues mode.
First, we need to check the VMQ number of my LBFO team.
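The current VMQ settings of the team members can be listed with Get-NetAdapterVmq. A minimal sketch (the adapter names Fiber01 and Fiber02 are the two 10GbE NICs used later in this article; adjust for your own):

```powershell
# List VMQ state, base/max processors and queue count for the team's physical NICs
Get-NetAdapterVmq -Name "Fiber01", "Fiber02" |
    Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues
```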
As you can see, VMQ is Enabled (True), but the Base processor on both 10GbE adapters is set to 0 and Max to 16. The processor arrays are therefore overlapping; and because the LBFO team is set up in Sum-of-Queues mode, the network adapters in the team must use non-overlapping processor arrays.
I have one converged virtual switch here with 2x10GbE NICs and 63 queues per NIC, used both for the host vNICs and for the vmNICs in the VMs, so the total number of VMQs in the LBFO team is 126.
You may be wondering why 63 queues instead of 64, which in this scenario would give 128 (64 for NIC1 + 64 for NIC2). The system reserves one VMQ on each port as the default queue, so you see 63 per port and 126 per LBFO team.
Before we start configuring VMQ for each adapter, we need to determine whether Hyper-Threading is enabled on the system by running the following cmdlet:
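A sketch of the check using WMI (the Win32_Processor class reports cores and logical processors per socket):

```powershell
# If NumberOfLogicalProcessors is twice NumberOfCores, Hyper-Threading is enabled
Get-WmiObject -Class Win32_Processor |
    Select-Object Name, NumberOfCores, NumberOfLogicalProcessors
```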
As you can see, NumberOfLogicalProcessors is twice NumberOfCores, so my server has two 8-core processors with Hyper-Threading enabled, and we can see 32 logical processors (LPs) in Task Manager.
Let’s start configuring the virtual machine queue for both adapters.
Because my team is in Sum-of-Queues mode, the team members' processor arrays must not overlap (or must overlap as little as possible). For example, in my scenario I have a 16-core host (32 logical processors) with a 2x10Gbps NIC team. I set NIC1 to use base processor 0 with a maximum of 8 processors (so this NIC uses logical processors 0, 2, 4, 6, 8, 10, 12 and 14 for VMQ), and NIC2 to use base processor 16, also with a maximum of 8 processors (so it uses logical processors 16, 18, 20, 22, 24, 26, 28 and 30 for VMQ).
As a best practice, make sure that the base processor is not set to 0, because the first core, logical processor 0, is reserved for default (non-RSS and non-dVMQ) network processing.
Open PowerShell and run Set-NetAdapterVmq for each network card:
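A sketch using the values described above (Fiber01 and Fiber02 are the adapter names used later in this article):

```powershell
# NIC1 gets logical processors 0-15 (even LPs with HT), NIC2 gets 16-31
Set-NetAdapterVmq -Name "Fiber01" -BaseProcessorNumber 0  -MaxProcessors 8
Set-NetAdapterVmq -Name "Fiber02" -BaseProcessorNumber 16 -MaxProcessors 8
```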
Now check that VMQ is enabled:
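For example (a sketch, reusing the same adapter names):

```powershell
# Verify the new base processors took effect on both team members
Get-NetAdapterVmq -Name "Fiber01", "Fiber02" |
    Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors
```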
As you can see now, BaseVmqProcessor is 0 for NIC1 and 16 for NIC2.
So what have we done here? The 126 queues are divided across 16 processors per NIC: the first network card's 63 queues can be spread anywhere over processors 0-15, and the second network card's over processors 16-31. Remember that all 16 processors are used because I have more queues than processors. However, if you had, for example, 8 queues per network card, no more than 8 processors would be used, because there are only 8 queues.
But when I set the VMQ, the error didn’t go away.
As I mentioned at the beginning of this article, I use a converged team for both the vmNICs (VMs) and the host vNICs.
If we look at the host's RSS settings, we can see that the base processor is likewise set to 0 on NIC1 and to 16 on NIC2; therefore the RSS processor arrays overlap with the VMQ arrays.
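The host-side RSS settings can be inspected with Get-NetAdapterRss. A sketch against the same physical team members:

```powershell
# Show the RSS processor range configured on each physical NIC
Get-NetAdapterRss -Name "Fiber01", "Fiber02" |
    Format-List Name, Enabled, BaseProcessorNumber, MaxProcessors
```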
As a side note and best practice, you should split the host vNICs from the vmNICs with two separate physical adapters (teams).
In this case, we split RSS and dynamic VMQ roughly 50/50: RSS uses the 16 logical processors of CPU0 (0-15), and dVMQ uses the remaining 16 logical processors of CPU1 (16-31).
The settings for the two 10GbE network cards again depend on whether the NICs are teamed in Sum-of-Queues mode or Min-Queues mode. NIC1 (Fiber01) and NIC2 (Fiber02) are in switch-independent and dynamic mode, so they are in Sum-of-Queues mode. This means the team's network cards must use non-overlapping processor arrays. The settings for the two 10GbE network cards are therefore as follows:
```powershell
Set-NetAdapterRss "Fiber01" -BaseProcessorNumber 0  -MaxProcessors 4
Set-NetAdapterRss "Fiber02" -BaseProcessorNumber 8  -MaxProcessors 4
Set-NetAdapterVmq "Fiber01" -BaseProcessorNumber 16 -MaxProcessors 4
Set-NetAdapterVmq "Fiber02" -BaseProcessorNumber 24 -MaxProcessors 4
```
Note: According to Microsoft, as soon as you attach a Hyper-V virtual switch to the LBFO team, RSS is disabled on the host and VMQ is enabled; in other words, Set-NetAdapterRss really has no effect here and Set-NetAdapterVmq takes priority. So if we look again, we can see that RSS is aligned with VMQ.
Next, you must restart the virtual machines for the new settings to take effect, because each vmNIC is assigned one queue when the virtual machine is started.
Last but not least, you can confirm this by running Get-NetAdapterVmqQueue, which shows the queues assigned to the vmNICs of all virtual machines on that Hyper-V host.
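For example (a sketch):

```powershell
# Shows each queue's ID, the processor it runs on, and the vmNIC (MAC/VLAN/VM) it serves
Get-NetAdapterVmqQueue |
    Format-Table Name, QueueID, MacAddress, VlanID, Processor, VmFriendlyName
```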
Finally, when VMQ and RSS are set up correctly, the error disappears!
Hopefully this will help.
Enjoy your day!