User's Guide
Table of Contents
- Table of Contents
- Preface
- 1 Functionality and Features
- 2 Configuring Teaming in Windows Server
- 3 Virtual LANs in Windows
- 4 Installing the Hardware
- 5 Manageability
- 6 Boot Agent Driver Software
- 7 Linux Driver Software
- Introduction
- Limitations
- Packaging
- Installing Linux Driver Software
- Load and Run Necessary iSCSI Software Components
- Unloading or Removing the Linux Driver
- Patching PCI Files (Optional)
- Network Installations
- Setting Values for Optional Properties
- Driver Defaults
- Driver Messages
- bnx2x Driver Messages
- bnx2i Driver Messages
- BNX2I Driver Sign-on
- Network Port to iSCSI Transport Name Binding
- Driver Completes Handshake with iSCSI Offload-enabled C-NIC Device
- Driver Detects iSCSI Offload Is Not Enabled on the C-NIC Device
- Exceeds Maximum Allowed iSCSI Connection Offload Limit
- Network Route to Target Node and Transport Name Binding Are Two Different Devices
- Target Cannot Be Reached on Any of the C-NIC Devices
- Network Route Is Assigned to Network Interface, Which Is Down
- SCSI-ML Initiated Host Reset (Session Recovery)
- C-NIC Detects iSCSI Protocol Violation - Fatal Errors
- C-NIC Detects iSCSI Protocol Violation - Non-Fatal, Warning
- Driver Puts a Session Through Recovery
- Reject iSCSI PDU Received from the Target
- Open-iSCSI Daemon Handing Over Session to Driver
- bnx2fc Driver Messages
- BNX2FC Driver Signon
- Driver Completes Handshake with FCoE Offload Enabled C-NIC Device
- Driver Fails Handshake with FCoE Offload Enabled C-NIC Device
- No Valid License to Start FCoE
- Session Failures Due to Exceeding Maximum Allowed FCoE Offload Connection Limit or Memory Limits
- Session Offload Failures
- Session Upload Failures
- Unable to Issue ABTS
- Unable to Recover the IO Using ABTS (Due to ABTS Timeout)
- Unable to Issue I/O Request Due to Session Not Ready
- Drop Incorrect L2 Receive Frames
- Host Bus Adapter and lport Allocation Failures
- NPIV Port Creation
- Teaming with Channel Bonding
- Statistics
- Linux iSCSI Offload
- 8 VMware Driver Software
- Introduction
- Packaging
- Download, Install, and Update Drivers
- Driver Parameters
- FCoE Support
- iSCSI Support
- 9 Windows Driver Software
- Supported Drivers
- Installing the Driver Software
- Modifying the Driver Software
- Repairing or Reinstalling the Driver Software
- Removing the Device Drivers
- Viewing or Changing the Properties of the Adapter
- Setting Power Management Options
- Configuring the Communication Protocol to Use with QCC GUI, QCC PowerKit, and QCS CLI
- 10 Citrix XenServer Driver Software
- 11 iSCSI Protocol
- iSCSI Boot
- Supported Operating Systems for iSCSI Boot
- iSCSI Boot Setup
- Configuring the iSCSI Target
- Configuring iSCSI Boot Parameters
- MBA Boot Protocol Configuration
- iSCSI Boot Configuration
- Enabling CHAP Authentication
- Configuring the DHCP Server to Support iSCSI Boot
- DHCP iSCSI Boot Configuration for IPv4
- DHCP iSCSI Boot Configuration for IPv6
- Configuring the DHCP Server
- Preparing the iSCSI Boot Image
- Booting
- Other iSCSI Boot Considerations
- Troubleshooting iSCSI Boot
- iSCSI Crash Dump
- iSCSI Offload in Windows Server
- iSCSI Boot
- 12 Marvell Teaming Services
- Executive Summary
- Teaming Mechanisms
- Teaming and Other Advanced Networking Properties
- General Network Considerations
- Application Considerations
- Troubleshooting Teaming Problems
- Frequently Asked Questions
- Event Log Messages
- 13 NIC Partitioning and Bandwidth Management
- 14 Fibre Channel Over Ethernet
- Overview
- FCoE Boot from SAN
- Preparing System BIOS for FCoE Build and Boot
- Preparing Marvell Multiple Boot Agent for FCoE Boot (CCM)
- Preparing Marvell Multiple Boot Agent for FCoE Boot (UEFI)
- Provisioning Storage Access in the SAN
- One-Time Disabled
- Windows Server 2016/2019/Azure Stack HCI FCoE Boot Installation
- Linux FCoE Boot Installation
- VMware ESXi FCoE Boot Installation
- Booting from SAN After Installation
- Configuring FCoE
- N_Port ID Virtualization (NPIV)
- 15 Data Center Bridging
- 16 SR-IOV
- 17 Specifications
- 18 Regulatory Information
- 19 Troubleshooting
- Hardware Diagnostics
- Checking Port LEDs
- Troubleshooting Checklist
- Checking if Current Drivers Are Loaded
- Running a Cable Length Test
- Testing Network Connectivity
- Microsoft Virtualization with Hyper-V
- Removing the Marvell 57xx and 57xxx Device Drivers
- Upgrading Windows Operating Systems
- Marvell Boot Agent
- Linux
- NPAR
- Kernel Debugging Over Ethernet
- Miscellaneous
- A Revision History
12–Marvell Teaming Services
Executive Summary
SLB receive load balancing attempts to load balance incoming traffic for client
machines across physical ports in the team. It uses a modified gratuitous ARP to
advertise a different MAC address for the team IP address in the sender physical
and protocol address. The G-ARP is unicast with the MAC and IP address of a
client machine in the target physical and protocol address, respectively. This
action causes the target client to update its ARP cache with a new MAC address
mapped to the team IP address. G-ARPs are not broadcast because that would cause
all clients to send their traffic to the same port; the benefits achieved
through client load balancing would be eliminated, and out-of-order frame
delivery could result. This receive load-balancing scheme works as long as all clients
and the teamed system are on the same subnet or broadcast domain.
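For illustration only, the following sketch builds a frame of the same shape as the modified G-ARP described above: an ARP reply that carries the team IP address with a per-client MAC address in the sender fields and is unicast to one client. The scapy library, the addresses, and the interface name are assumptions made for the example; the teaming driver generates these frames itself.

```python
# Illustrative sketch of the modified, unicast gratuitous ARP described above.
# All addresses and the interface name are assumed values for the example.
from scapy.all import ARP, Ether, sendp

TEAM_IP = "192.168.1.10"          # assumed team IP address
PORT_MAC = "00:10:18:aa:bb:02"    # assumed MAC of the physical port chosen for this client
CLIENT_IP = "192.168.1.50"        # assumed client being re-pointed
CLIENT_MAC = "00:25:90:cc:dd:ee"  # assumed client MAC

# Sender fields advertise the team IP with the selected port's MAC;
# target fields name the specific client, and the frame is unicast to it.
garp = Ether(src=PORT_MAC, dst=CLIENT_MAC) / ARP(
    op=2,                          # ARP reply ("is-at"), as used for gratuitous ARP
    hwsrc=PORT_MAC, psrc=TEAM_IP,
    hwdst=CLIENT_MAC, pdst=CLIENT_IP,
)
sendp(garp, iface="eth0")          # "eth0" is an assumed interface name
```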
When the clients and the system are on different subnets, and incoming traffic has
to traverse a router, the received traffic destined for the system is not load
balanced. The physical adapter that the intermediate driver has selected to carry
the IP flow carries all of the traffic. When the router sends a frame to the team IP
address, it broadcasts an ARP request (if the address is not already in its ARP cache). The server
software stack generates an ARP reply with the team MAC address, but the
intermediate driver modifies the ARP reply and sends it over a specific physical
adapter, establishing the flow for that session.
The reason is that ARP is not a routable protocol. It does not have an IP header
and therefore is not sent to the router or default gateway. ARP is only a local
subnet protocol. In addition, because the G-ARP is not a broadcast packet, the
router will not process it and will not update its own ARP cache.
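One way to observe this behavior is to check which MAC address a host currently has cached for the team IP. The following is a minimal diagnostic sketch, assuming a Linux host with the iproute2 ip command and an illustrative team IP address; on a local-subnet client this mapping is what the unicast G-ARP updates, while a router's cache keeps the single MAC from the modified ARP reply.

```python
# Minimal diagnostic sketch (assumes a Linux host with iproute2 installed and
# an illustrative team IP of 192.168.1.10): print the MAC address this host
# currently has cached for the team IP.
import subprocess

TEAM_IP = "192.168.1.10"  # assumed team IP address

neigh = subprocess.run(["ip", "neigh", "show"],
                       capture_output=True, text=True, check=True).stdout
entries = [line for line in neigh.splitlines() if line.startswith(TEAM_IP + " ")]
print("\n".join(entries) or f"No neighbor-cache entry for {TEAM_IP}")
```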
The only way that the router would process an ARP that is intended for another
network device is if it has Proxy ARP enabled and the host has no default
gateway. This situation is very rare and not recommended for most applications.
Transmit traffic through a router is load balanced because transmit load balancing
is based on the source and destination IP addresses and the TCP/UDP port numbers.
Because routers do not alter the source and destination IP addresses, the
load-balancing algorithm works as intended.
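The following sketch shows only the general idea of hashing a flow's addresses and ports to pick one team member. The actual algorithm used by the intermediate driver is not described here; the function name, the hash, the addresses, and the port count are illustrative assumptions.

```python
# Illustrative sketch of flow-based transmit load balancing: hash the IP
# addresses and TCP/UDP ports of a flow to choose one physical port.
# Because routers leave these values unchanged, the same flow always maps
# to the same team member, which is why routed transmit traffic still
# load balances.
import zlib

def select_port(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                num_ports: int) -> int:
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % num_ports

# Example: a given flow consistently maps to the same port index.
print(select_port("10.0.0.5", "192.168.1.10", 49152, 80, num_ports=2))
```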
Configuring routers for the Hot Standby Router Protocol (HSRP) does not allow
receive load balancing to occur in the adapter team. In general, HSRP allows
two routers to act as one, advertising a virtual IP address and a virtual MAC address.
One physical router is the active interface while the other is standby. Although
HSRP can also load-share host nodes across multiple routers in HSRP groups
(by using different default gateways on the host nodes), the return traffic is
always directed to the primary MAC address of the team.