In configurations with NIC partitioning enabled, ETS (if operational) overrides
the bandwidth weights assigned to each function. Transmission selection
weights are instead applied per protocol, according to the ETS settings.
Maximum bandwidths per function are still honored in the presence of ETS.
In the absence of an iSCSI or FCoE application TLV advertised by the
DCBX peer, the adapter uses the settings from the local Admin MIB.
Data Center Bridging in Windows Server 2012 and Later
Starting with Windows Server 2012, Microsoft introduced a new way of managing
quality of service (QoS) at the OS level. The two main aspects of Windows QoS
are:
 A vendor-independent method for managing DCB settings on NICs, both
individually and across an entire domain. The management interface is
provided by Windows PowerShell Cmdlets.
 The ability to tag specific types of Layer 2 networking traffic, such as SMB
traffic, so that hardware bandwidth can be managed using ETS (see the
example following this list).
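For example, traffic tagging is configured with the New-NetQosPolicy cmdlet.
The following is a minimal sketch; the policy names, port number, and priority
values are arbitrary choices for illustration:

    # Tag SMB traffic with 802.1p priority 3, using the built-in SMB filter.
    New-NetQosPolicy -Name "SMB" -SMB -PriorityValue8021Action 3

    # Policies can also match on protocol and port; for example, tag TCP
    # traffic destined for port 5001 (a placeholder) with priority 5.
    New-NetQosPolicy -Name "Backup" -IPProtocolMatchCondition TCP `
        -IPDstPortMatchCondition 5001 -PriorityValue8021Action 5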
All QLogic Converged Network Adapters that support DCB are capable of
interoperating with Windows QoS.
To enable the Windows QoS feature, ensure that the QLogic device is
DCB-capable:
1. Using CCM or QCS, enable data center bridging.
2. Using Windows Device Manager or QCS, select the NDIS driver, display
Advanced properties, and enable the Quality of Service property. (This
property can also be set from PowerShell, as shown in the sketch after
these steps.)
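On Windows Server 2012 and later, the NDIS Quality of Service property can
also be enabled from PowerShell. The following is a minimal sketch; the
adapter name is a placeholder, and the Data-Center-Bridging feature provides
the DCB cmdlets:

    # Install the DCB feature to obtain the NetQos PowerShell cmdlets.
    Install-WindowsFeature Data-Center-Bridging

    # Enable the QoS advanced property on the adapter (name is a placeholder).
    Enable-NetAdapterQos -Name "SLOT 2 Port 1"

    # Confirm the adapter's QoS/DCB operational state.
    Get-NetAdapterQos -Name "SLOT 2 Port 1"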
When QoS is enabled, administrative control over DCB-related settings is
relinquished to the operating system (that is, QCS can no longer be used for
administrative control of DCB). You can use PowerShell to configure and
manage the QoS feature. Using PowerShell Cmdlets, you can configure various
QoS-related parameters, such as traffic classification, priority flow control, and
traffic class throughput scheduling, as shown in the sketch that follows.
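As an illustrative sketch under assumed values (the class name, priority, and
bandwidth percentage are arbitrary examples, not recommendations):

    # Enable priority flow control (PFC) on priority 3 only.
    Enable-NetQosFlowControl -Priority 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

    # Create an ETS traffic class that reserves 40 percent of the link
    # bandwidth for traffic tagged with priority 3.
    New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 40 `
        -Algorithm ETS

    # Apply the locally configured settings instead of accepting DCBX
    # settings from the peer switch.
    Set-NetQosDcbxSetting -Willing $false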
For more information on using PowerShell Cmdlets, see the DCB Windows
PowerShell User Scripting Guide in the Microsoft TechNet Library.