User's Guide
Table Of Contents
- Table of Contents
- Preface
- 1 Functionality and Features
- 2 Configuring Teaming in Windows Server
- 3 Virtual LANs in Windows
- 4 Installing the Hardware
- 5 Manageability
- 6 Boot Agent Driver Software
- 7 Linux Driver Software
- Introduction
- Limitations
- Packaging
- Installing Linux Driver Software
- Load and Run Necessary iSCSI Software Components
- Unloading or Removing the Linux Driver
- Patching PCI Files (Optional)
- Network Installations
- Setting Values for Optional Properties
- Driver Defaults
- Driver Messages
- bnx2x Driver Messages
- bnx2i Driver Messages
- BNX2I Driver Sign-on
- Network Port to iSCSI Transport Name Binding
- Driver Completes Handshake with iSCSI Offload-enabled C-NIC Device
- Driver Detects iSCSI Offload Is Not Enabled on the C-NIC Device
- Exceeds Maximum Allowed iSCSI Connection Offload Limit
- Network Route to Target Node and Transport Name Binding Are Two Different Devices
- Target Cannot Be Reached on Any of the C-NIC Devices
- Network Route Is Assigned to Network Interface, Which Is Down
- SCSI-ML Initiated Host Reset (Session Recovery)
- C-NIC Detects iSCSI Protocol Violation - Fatal Errors
- C-NIC Detects iSCSI Protocol Violation—Non-FATAL, Warning
- Driver Puts a Session Through Recovery
- Reject iSCSI PDU Received from the Target
- Open-iSCSI Daemon Handing Over Session to Driver
- bnx2fc Driver Messages
- BNX2FC Driver Signon
- Driver Completes Handshake with FCoE Offload Enabled C-NIC Device
- Driver Fails Handshake with FCoE Offload Enabled C-NIC Device
- No Valid License to Start FCoE
- Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits
- Session Offload Failures
- Session Upload Failures
- Unable to Issue ABTS
- Unable to Recover the IO Using ABTS (Due to ABTS Timeout)
- Unable to Issue I/O Request Due to Session Not Ready
- Drop Incorrect L2 Receive Frames
- Host Bus Adapter and lport Allocation Failures
- NPIV Port Creation
- Teaming with Channel Bonding
- Statistics
- Linux iSCSI Offload
- 8 VMware Driver Software
- Introduction
- Packaging
- Download, Install, and Update Drivers
- Driver Parameters
- FCoE Support
- iSCSI Support
- 9 Windows Driver Software
- Supported Drivers
- Installing the Driver Software
- Modifying the Driver Software
- Repairing or Reinstalling the Driver Software
- Removing the Device Drivers
- Viewing or Changing the Properties of the Adapter
- Setting Power Management Options
- Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI
- 10 Citrix XenServer Driver Software
- 11 iSCSI Protocol
- iSCSI Boot
- Supported Operating Systems for iSCSI Boot
- iSCSI Boot Setup
- Configuring the iSCSI Target
- Configuring iSCSI Boot Parameters
- MBA Boot Protocol Configuration
- iSCSI Boot Configuration
- Enabling CHAP Authentication
- Configuring the DHCP Server to Support iSCSI Boot
- DHCP iSCSI Boot Configuration for IPv4
- DHCP iSCSI Boot Configuration for IPv6
- Configuring the DHCP Server
- Preparing the iSCSI Boot Image
- Booting
- Other iSCSI Boot Considerations
- Troubleshooting iSCSI Boot
- iSCSI Crash Dump
- iSCSI Offload in Windows Server
- iSCSI Boot
- 12 Marvell Teaming Services
- Executive Summary
- Teaming Mechanisms
- Teaming and Other Advanced Networking Properties
- General Network Considerations
- Application Considerations
- Troubleshooting Teaming Problems
- Frequently Asked Questions
- Event Log Messages
- 13 NIC Partitioning and Bandwidth Management
- 14 Fibre Channel Over Ethernet
- Overview
- FCoE Boot from SAN
- Preparing System BIOS for FCoE Build and Boot
- Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)
- Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI)
- Provisioning Storage Access in the SAN
- One-Time Disabled
- Windows Server 2016/2019/Azure Stack HCI FCoE Boot Installation
- Linux FCoE Boot Installation
- VMware ESXi FCoE Boot Installation
- Booting from SAN After Installation
- Configuring FCoE
- N_Port ID Virtualization (NPIV)
- 15 Data Center Bridging
- 16 SR-IOV
- 17 Specifications
- 18 Regulatory Information
- 19 Troubleshooting
- Hardware Diagnostics
- Checking Port LEDs
- Troubleshooting Checklist
- Checking if Current Drivers Are Loaded
- Running a Cable Length Test
- Testing Network Connectivity
- Microsoft Virtualization with Hyper-V
- Removing the Marvell 57xx and 57xxx Device Drivers
- Upgrading Windows Operating Systems
- Marvell Boot Agent
- Linux
- NPAR
- Kernel Debugging Over Ethernet
- Miscellaneous
- A Revision History
12–Marvell Teaming Services
Executive Summary
Generic Trunking
Generic Trunking is a switch-assisted teaming mode and requires configuring
ports at both ends of the link: server interfaces and switch ports. This port
configuration is often referred to as Cisco Fast EtherChannel or Gigabit
EtherChannel. Generic trunking also supports similar implementations from other
switch OEMs, such as Extreme Networks Load Sharing and Bay Networks, as well as
IEEE 802.3ad Link Aggregation static mode. In this mode, the team advertises
one MAC address and one IP address when the protocol stack responds to ARP
requests. In addition, each physical adapter in the team uses the same team
MAC address when transmitting frames. Using a single address is possible because
the switch at the other end of the link is aware of the teaming mode and handles
the use of one MAC address by every port in the team. The forwarding
table in the switch reflects the trunk as a single virtual port.
In this teaming mode, the intermediate driver controls load balancing and failover
for outgoing traffic only, while incoming traffic is controlled by the switch firmware
and hardware. Most switches implement an XOR hash of the source and
destination MAC addresses.
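As a rough illustration of that hash, the following Python sketch (a hypothetical
simplification; real switches and the intermediate driver compute the hash in
hardware or firmware, and the exact inputs and hash width vary by vendor) reduces
a source/destination MAC pair to a trunk member index:

# Hypothetical illustration of XOR-based member selection on a trunk.
# Not the actual switch or driver implementation.
def select_member_port(src_mac: str, dst_mac: str, num_ports: int) -> int:
    """Pick a trunk member by XORing the low bytes of the two MAC addresses."""
    src_low = int(src_mac.split(":")[-1], 16)   # last byte of source MAC
    dst_low = int(dst_mac.split(":")[-1], 16)   # last byte of destination MAC
    return (src_low ^ dst_low) % num_ports

# The same address pair always maps to the same member port.
print(select_member_port("00:0e:a6:12:34:56", "00:0e:a6:ab:cd:01", 4))

Because the hash depends only on the address pair, all frames of a given
conversation traverse the same physical link, which preserves frame ordering;
it does not, however, guarantee an even traffic distribution across the team
members.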
Link Aggregation (IEEE 802.3ad LACP)
Link aggregation is similar to generic trunking except that it uses the Link
Aggregation Control Protocol (LACP) to negotiate the ports that will make up the
team. LACP must be enabled at both ends of the link for the team to be
operational. If LACP is not available at both ends of the link, 802.3ad provides a
manual aggregation that only requires both ends of the link to be in a link-up state.
Because manual aggregation activates a member link without performing the LACP
message exchanges, it should not be considered as reliable and robust as an
LACP-negotiated link. LACP automatically determines which member links can be
aggregated and then aggregates them. It provides for the controlled addition and
removal of physical links in the link aggregation so that no frames are lost or
duplicated. The orderly removal of aggregate link members is provided by the
Marker protocol, which can optionally be enabled for LACP-enabled aggregate links.
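As a simplified sketch of the aggregation decision (assumed behavior based on
802.3ad, not the driver's actual code), links whose LACPDUs report the same
partner system ID and operational key are candidates for the same aggregate;
anything else forms a separate aggregate. The LinkInfo names and values below
are illustrative only:

# Conceptual sketch of 802.3ad aggregation selection, not driver code:
# links that report the same partner system ID and operational key
# may be aggregated together.
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class LinkInfo:
    name: str              # local port name (illustrative)
    partner_system: str    # partner system ID learned from LACPDUs
    partner_key: int       # partner operational key learned from LACPDUs

def group_aggregates(links):
    """Group candidate links by (partner system ID, partner key)."""
    aggregates = defaultdict(list)
    for link in links:
        aggregates[(link.partner_system, link.partner_key)].append(link.name)
    return dict(aggregates)

links = [
    LinkInfo("port1", "00:11:22:33:44:55", 10),
    LinkInfo("port2", "00:11:22:33:44:55", 10),  # same partner: joins port1's aggregate
    LinkInfo("port3", "66:77:88:99:aa:bb", 20),  # different partner: separate aggregate
]
print(group_aggregates(links))

In practice, this is why all members of an LACP team must terminate on ports that
present the same partner system ID, such as a single switch or a switch stack
acting as one system.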
The link aggregation group advertises a single MAC address for all the ports in
the trunk. The MAC address of the aggregator can be the MAC address of one of the
physical ports that make up the group. LACP and Marker protocol frames use a
multicast destination address.
NOTE
Generic trunking is not supported on iSCSI offload adapters.