User's Guide
Table of Contents
- Table of Contents
- Preface
- 1 Functionality and Features
- 2 Configuring Teaming in Windows Server
- 3 Virtual LANs in Windows
- 4 Installing the Hardware
- 5 Manageability
- 6 Boot Agent Driver Software
- 7 Linux Driver Software
- Introduction
- Limitations
- Packaging
- Installing Linux Driver Software
- Load and Run Necessary iSCSI Software Components
- Unloading or Removing the Linux Driver
- Patching PCI Files (Optional)
- Network Installations
- Setting Values for Optional Properties
- Driver Defaults
- Driver Messages
- bnx2x Driver Messages
- bnx2i Driver Messages
- BNX2I Driver Sign-on
- Network Port to iSCSI Transport Name Binding
- Driver Completes Handshake with iSCSI Offload-enabled C-NIC Device
- Driver Detects iSCSI Offload Is Not Enabled on the C-NIC Device
- Exceeds Maximum Allowed iSCSI Connection Offload Limit
- Network Route to Target Node and Transport Name Binding Are Two Different Devices
- Target Cannot Be Reached on Any of the C-NIC Devices
- Network Route Is Assigned to Network Interface, Which Is Down
- SCSI-ML Initiated Host Reset (Session Recovery)
- C-NIC Detects iSCSI Protocol Violation: Fatal Errors
- C-NIC Detects iSCSI Protocol Violation: Non-Fatal, Warning
- Driver Puts a Session Through Recovery
- Reject iSCSI PDU Received from the Target
- Open-iSCSI Daemon Handing Over Session to Driver
- bnx2fc Driver Messages
- BNX2FC Driver Sign-on
- Driver Completes Handshake with FCoE Offload Enabled C-NIC Device
- Driver Fails Handshake with FCoE Offload Enabled C-NIC Device
- No Valid License to Start FCoE
- Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits
- Session Offload Failures
- Session Upload Failures
- Unable to Issue ABTS
- Unable to Recover the IO Using ABTS (Due to ABTS Timeout)
- Unable to Issue I/O Request Due to Session Not Ready
- Drop Incorrect L2 Receive Frames
- Host Bus Adapter and lport Allocation Failures
- NPIV Port Creation
- Teaming with Channel Bonding
- Statistics
- Linux iSCSI Offload
- 8 VMware Driver Software
- Introduction
- Packaging
- Download, Install, and Update Drivers
- Driver Parameters
- FCoE Support
- iSCSI Support
- 9 Windows Driver Software
- Supported Drivers
- Installing the Driver Software
- Modifying the Driver Software
- Repairing or Reinstalling the Driver Software
- Removing the Device Drivers
- Viewing or Changing the Properties of the Adapter
- Setting Power Management Options
- Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI
- 10 Citrix XenServer Driver Software
- 11 iSCSI Protocol
- iSCSI Boot
- Supported Operating Systems for iSCSI Boot
- iSCSI Boot Setup
- Configuring the iSCSI Target
- Configuring iSCSI Boot Parameters
- MBA Boot Protocol Configuration
- iSCSI Boot Configuration
- Enabling CHAP Authentication
- Configuring the DHCP Server to Support iSCSI Boot
- DHCP iSCSI Boot Configuration for IPv4
- DHCP iSCSI Boot Configuration for IPv6
- Configuring the DHCP Server
- Preparing the iSCSI Boot Image
- Booting
- Other iSCSI Boot Considerations
- Troubleshooting iSCSI Boot
- iSCSI Crash Dump
- iSCSI Offload in Windows Server
- iSCSI Boot
- 12 Marvell Teaming Services
- Executive Summary
- Teaming Mechanisms
- Teaming and Other Advanced Networking Properties
- General Network Considerations
- Application Considerations
- Troubleshooting Teaming Problems
- Frequently Asked Questions
- Event Log Messages
- 13 NIC Partitioning and Bandwidth Management
- 14 Fibre Channel Over Ethernet
- Overview
- FCoE Boot from SAN
- Preparing System BIOS for FCoE Build and Boot
- Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)
- Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI)
- Provisioning Storage Access in the SAN
- One-Time Disabled
- Windows Server 2016/2019/Azure Stack HCI FCoE Boot Installation
- Linux FCoE Boot Installation
- VMware ESXi FCoE Boot Installation
- Booting from SAN After Installation
- Configuring FCoE
- N_Port ID Virtualization (NPIV)
- 15 Data Center Bridging
- 16 SR-IOV
- 17 Specifications
- 18 Regulatory Information
- 19 Troubleshooting
- Hardware Diagnostics
- Checking Port LEDs
- Troubleshooting Checklist
- Checking if Current Drivers Are Loaded
- Running a Cable Length Test
- Testing Network Connectivity
- Microsoft Virtualization with Hyper-V
- Removing the Marvell 57xx and 57xxx Device Drivers
- Upgrading Windows Operating Systems
- Marvell Boot Agent
- Linux
- NPAR
- Kernel Debugging Over Ethernet
- Miscellaneous
- A Revision History
12 Marvell Teaming Services
Teaming Mechanisms
Outbound Traffic Flow
The Marvell intermediate driver manages the outbound traffic flow for all teaming
modes. For outbound traffic, every packet is first classified into a flow, and then
distributed to the selected physical adapter for transmission. The flow
classification involves an efficient hash computation over known protocol fields.
The resulting hash value is used to index into an Outbound Flow Hash Table. The
selected Outbound Flow Hash Entry contains the index of the selected physical
adapter responsible for transmitting this flow. The source MAC address of the packet is then modified to the MAC address of the selected physical adapter, and the modified packet is passed to that adapter for transmission.
Outbound TCP and UDP packets are classified using Layer 3 and Layer 4 header information. This scheme improves the load distribution for popular Internet protocol services that use well-known ports, such as HTTP and FTP. Because every packet of a session hashes to the same entry, QLASP performs load balancing on a TCP session basis, not on a packet-by-packet basis.
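The following user-space C sketch illustrates this classification scheme. It is illustrative only: the structure names, table size, and hash function are assumptions for this example, not QLASP's actual implementation.

```c
/* Minimal sketch of hash-based outbound flow classification.
 * Names, table size, and the hash function are illustrative
 * assumptions, not the driver's actual implementation. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define FLOW_TABLE_SIZE 256   /* assumed power-of-two table size */
#define NUM_TEAM_PORTS  4

struct flow_hash_entry {
    uint8_t  adapter_index;   /* teamed port that carries this flow */
    uint64_t tx_packets;      /* per-flow statistics for rebalancing */
};

static struct flow_hash_entry flow_table[FLOW_TABLE_SIZE];
static const uint8_t adapter_mac[NUM_TEAM_PORTS][6] = {
    {0x00,0x0e,0x1e,0x00,0x00,0x01},  /* placeholder MAC addresses */
    {0x00,0x0e,0x1e,0x00,0x00,0x02},
    {0x00,0x0e,0x1e,0x00,0x00,0x03},
    {0x00,0x0e,0x1e,0x00,0x00,0x04},
};

/* Hash the Layer 3/4 fields that identify a TCP or UDP flow, so
 * every packet of one session lands on the same teamed port. */
static unsigned classify_flow(uint32_t src_ip, uint32_t dst_ip,
                              uint16_t src_port, uint16_t dst_port)
{
    uint32_t h = src_ip ^ dst_ip ^ (((uint32_t)src_port << 16) | dst_port);
    h = (h * 2654435761u) >> 16;      /* simple multiplicative mix */
    return h % FLOW_TABLE_SIZE;
}

int main(void)
{
    /* Pre-assign flows round-robin; the real driver rebalances
     * these periodically from the per-entry counters. */
    for (unsigned i = 0; i < FLOW_TABLE_SIZE; i++)
        flow_table[i].adapter_index = i % NUM_TEAM_PORTS;

    /* Classify one example packet: 10.0.0.1:12345 -> 10.0.0.2:80 */
    unsigned idx = classify_flow(0x0a000001, 0x0a000002, 12345, 80);
    struct flow_hash_entry *e = &flow_table[idx];
    e->tx_packets++;

    /* Rewrite the frame's source MAC to the selected adapter's MAC. */
    uint8_t src_mac[6];
    memcpy(src_mac, adapter_mac[e->adapter_index], 6);

    printf("flow -> table entry %u -> port %u, src MAC ends %02x\n",
           idx, e->adapter_index, src_mac[5]);
    return 0;
}
```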
Statistics counters in the Outbound Flow Hash Entries are also updated after classification. The load-balancing engine uses these counters to periodically redistribute the flows across teamed ports. The outbound code path is designed for the best possible concurrency: multiple concurrent accesses to the Outbound Flow Hash Table are allowed.
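A rebalancing pass of the kind described above might look like the following sketch. The greedy move-one-flow policy and all names are assumptions for illustration only; the guide does not document QLASP's actual redistribution algorithm.

```c
/* Sketch of periodic flow redistribution: per-entry transmit
 * counters drive rebalancing across teamed ports. The policy
 * shown here is an illustrative assumption. */
#include <stdint.h>
#include <stdio.h>

#define FLOW_TABLE_SIZE 256
#define NUM_TEAM_PORTS  4

struct flow_hash_entry {
    uint8_t  adapter_index;
    uint64_t tx_bytes;        /* accumulated since the last pass */
};

static struct flow_hash_entry flow_table[FLOW_TABLE_SIZE];

static void rebalance(void)
{
    uint64_t load[NUM_TEAM_PORTS] = {0};

    /* Sum the observed load per teamed port. */
    for (unsigned i = 0; i < FLOW_TABLE_SIZE; i++)
        load[flow_table[i].adapter_index] += flow_table[i].tx_bytes;

    unsigned busiest = 0, idlest = 0;
    for (unsigned p = 1; p < NUM_TEAM_PORTS; p++) {
        if (load[p] > load[busiest]) busiest = p;
        if (load[p] < load[idlest])  idlest  = p;
    }

    /* Greedy step: move one active flow from the busiest to the
     * idlest port, then clear counters for the next interval. */
    for (unsigned i = 0; i < FLOW_TABLE_SIZE; i++) {
        if (flow_table[i].adapter_index == busiest &&
            flow_table[i].tx_bytes) {
            flow_table[i].adapter_index = (uint8_t)idlest;
            break;
        }
    }
    for (unsigned i = 0; i < FLOW_TABLE_SIZE; i++)
        flow_table[i].tx_bytes = 0;
}

int main(void)
{
    flow_table[7].adapter_index = 0;
    flow_table[7].tx_bytes = 1000000;  /* one hot flow on port 0 */
    rebalance();
    printf("entry 7 now on port %u\n", flow_table[7].adapter_index);
    return 0;
}
```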
For protocols other than TCP/IP, the first physical adapter is always selected for outbound packets. The exception is Address Resolution Protocol (ARP), which is handled differently to achieve inbound load balancing.
Inbound Traffic Flow (SLB Only)
The Marvell intermediate driver manages the inbound traffic flow for the SLB
teaming mode. Unlike outbound load balancing, inbound load balancing can only
be applied to IP addresses that are located in the same subnet as the
load-balancing server. Inbound load balancing exploits a unique characteristic of the Address Resolution Protocol (RFC 826), in which each IP host uses its own ARP cache to encapsulate the IP datagram into an Ethernet frame. QLASP carefully manipulates the ARP response to direct each IP host to send its inbound IP packets to the desired physical adapter. Inbound load balancing is therefore a plan-ahead scheme based on the statistical history of the inbound flows. New connections from a client to the server always occur over the primary physical adapter, because the ARP reply generated by the operating system protocol stack always associates the logical IP address with the MAC address of the primary physical adapter.
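The following sketch illustrates the ARP manipulation conceptually: the reply's sender hardware address is set to the MAC of whichever teamed port should receive that host's inbound traffic. Only the four ARP address fields of an RFC 826 payload are modeled, and the port-selection policy shown is an assumption; the driver actually consults the statistical history of inbound flows.

```c
/* Conceptual sketch of inbound steering through ARP replies.
 * Field names and the selection policy are illustrative. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct arp_reply {            /* address fields of an ARP payload */
    uint8_t  sender_mac[6];   /* hardware address the host caches */
    uint32_t sender_ip;       /* the team's logical IP address */
    uint8_t  target_mac[6];
    uint32_t target_ip;       /* the requesting host */
};

static const uint8_t port_mac[2][6] = {
    {0x00,0x0e,0x1e,0x00,0x00,0x01},  /* placeholder port MACs */
    {0x00,0x0e,0x1e,0x00,0x00,0x02},
};

/* Steer hosts across ports; a real implementation would use the
 * statistical history of inbound flows, not a hash of the IP. */
static unsigned pick_port_for_host(uint32_t host_ip)
{
    return host_ip % 2;
}

static void build_arp_reply(struct arp_reply *r, uint32_t team_ip,
                            uint32_t host_ip, const uint8_t host_mac[6])
{
    unsigned port = pick_port_for_host(host_ip);
    /* The host caches sender_mac for team_ip, so its future
     * inbound traffic arrives on the chosen physical adapter. */
    memcpy(r->sender_mac, port_mac[port], 6);
    r->sender_ip = team_ip;
    memcpy(r->target_mac, host_mac, 6);
    r->target_ip = host_ip;
}

int main(void)
{
    struct arp_reply r;
    uint8_t host_mac[6] = {0xaa,0xbb,0xcc,0xdd,0xee,0xff};
    build_arp_reply(&r, 0x0a000064 /* 10.0.0.100 */,
                    0x0a000001 /* 10.0.0.1 */, host_mac);
    printf("host steered to port MAC ending %02x\n", r.sender_mac[5]);
    return 0;
}
```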
As in the outbound case, there is an Inbound Flow Head Hash Table. Each entry in this table heads a singly linked list, and each link (an Inbound Flow Entry) represents an IP host located in the same subnet.
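A data structure of this shape can be sketched as follows; the names, fields, and bucket count are illustrative assumptions rather than the driver's actual definitions.

```c
/* Sketch of a hash table whose buckets head singly linked lists
 * of per-host entries, as described above. Illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define INBOUND_TABLE_SIZE 64

struct inbound_flow_entry {
    uint32_t host_ip;                  /* IP host on the local subnet */
    uint8_t  adapter_index;            /* port its inbound traffic uses */
    struct inbound_flow_entry *next;   /* next host in this bucket */
};

static struct inbound_flow_entry *inbound_table[INBOUND_TABLE_SIZE];

static struct inbound_flow_entry *lookup_or_add(uint32_t host_ip)
{
    unsigned b = host_ip % INBOUND_TABLE_SIZE;

    /* Walk the bucket's singly linked list for an existing host. */
    for (struct inbound_flow_entry *e = inbound_table[b]; e; e = e->next)
        if (e->host_ip == host_ip)
            return e;

    /* Not found: push a new entry onto the bucket's list. */
    struct inbound_flow_entry *e = calloc(1, sizeof(*e));
    if (!e)
        exit(1);
    e->host_ip = host_ip;
    e->next = inbound_table[b];
    inbound_table[b] = e;
    return e;
}

int main(void)
{
    struct inbound_flow_entry *e = lookup_or_add(0x0a000001);
    e->adapter_index = 1;
    printf("host 10.0.0.1 pinned to port %u\n",
           lookup_or_add(0x0a000001)->adapter_index);
    return 0;
}
```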