Intel® Ethernet Adapters and Devices User Guide
Overview

Welcome to the User's Guide for Intel® Ethernet Adapters and devices. This guide covers hardware and software installation, setup procedures, and troubleshooting tips for Intel network adapters, connections, and other devices.

Installing the Network Adapter

If you are installing a network adapter, follow this procedure from step 1. If you are upgrading the driver software, start with step 5.
1. Make sure that you are installing the latest driver software for your adapter.
Advanced software and drivers are supported on the following operating systems:
- Microsoft Windows 7
- Microsoft Windows 8
- Microsoft Windows 8.1
- Microsoft Windows 10
- Linux*, v2.4 kernel or higher
- FreeBSD*

Supported Intel® 64 Architecture Operating Systems
- Microsoft* Windows* 7
- Microsoft Windows 8
- Microsoft Windows 8.
Cabling Requirements

Intel Gigabit Adapters

Fiber Optic Cables
- Laser wavelength: 850 nanometer (not visible).
- SC cable type:
  - Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
  - Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
  - Connector type: SC.
- LC cable type:
  - Multi-mode fiber with 50 micron core diameter; maximum length is 550 meters.
  - Multi-mode fiber with 62.5 micron core diameter; maximum length is 275 meters.
- To ensure compliance with CISPR 24 and the EU's EN55024, Intel® 10 Gigabit Server Adapters and Connections should be used only with CAT 6a shielded cables that are properly terminated according to the recommendations in EN50174-2.

10 Gigabit Ethernet over SFP+ Direct Attached Cable (Twinaxial)
- Maximum length is 10 meters.

Intel 40 Gigabit Adapters

Fiber Optic Cables
- Laser wavelength: 850 nanometer (not visible).
Installing Linux* Drivers from Source Code
1. Download and expand the base driver tar file.
2. Compile the driver module.
3. Install the module using the modprobe command.
4. Assign an IP address using the ifconfig command.

Optimizing Performance

You can configure Intel network adapter advanced settings to help optimize server performance.
- Increase the allocation size of Driver Resources (transmit/receive buffers). However, most TCP traffic patterns work best with the transmit buffer set to its default value and the receive buffer set to its minimum value.
- When passing traffic on multiple network ports using an I/O application that runs on most or all of the cores in your system, consider setting the CPU affinity for that application to fewer cores.
Optimized for CPU utilization:
- Maximize the Interrupt Moderation Rate.
- Keep the default setting for the number of Receive Descriptors; avoid setting large numbers of Receive Descriptors.
- Decrease the number of RSS Queues.
- In Hyper-V environments, decrease the maximum number of RSS CPUs.

Remote Storage

The remote storage features allow you to access a SAN or other networked storage using Ethernet protocols. This includes Data Center Bridging (DCB), iSCSI over DCB, and Fibre Channel over Ethernet (FCoE).
Intel® Ethernet FCoE (Fibre Channel over Ethernet)

Fibre Channel over Ethernet (FCoE) is the encapsulation of standard Fibre Channel (FC) protocol frames as data within standard Ethernet frames. This link-level encapsulation, teamed with an FCoE-aware Ethernet-to-FC gateway, extends an FC fabric to include Ethernet-based host connectivity. The FCoE specification focuses on encapsulation of FC frames specific to storage-class traffic, as defined by the Fibre Channel FC-4 FCP specification.
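The encapsulation can be sketched in a few lines. This is a deliberately minimal illustration, not the full FC-BB-5 mapping: a real FCoE frame also carries a version field, SOF/EOF delimiters, and padding, all omitted here. The FCoE EtherType value 0x8906 is the standard assignment.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw Fibre Channel frame as the payload of an Ethernet frame.

    Simplified sketch: real FCoE adds a version field, SOF/EOF
    delimiters, and padding per the FC-BB-5 mapping.
    """
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

frame = encapsulate_fc_frame(b"\x0e" * 6, b"\x02" * 6, b"\xaa" * 28)
assert frame[12:14] == b"\x89\x06"  # FCoE EtherType sits in the Ethernet header
```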
Point to Point (PT2PT) Mode In Point to Point mode, there are only two ENodes, and they are connected either directly or through a lossless Ethernet switch:
MultiPoint Mode

If more than two ENodes are detected in the VN2VN fabric, then all nodes should operate in Multipoint mode.

Enabling VN2VN in Microsoft Windows

To enable VN2VN in Microsoft Windows:
1. Start Windows Device Manager.
2. Open the appropriate FCoE miniport property sheet (generally under Storage controllers) and click the Advanced tab.
3. Select the VN2VN setting and choose "Enable."

Remote Boot

Remote Boot allows you to boot a system using only an Ethernet adapter.
Intel® Ethernet iSCSI Boot Intel® Ethernet iSCSI Boot provides the capability to boot a client system from a remote iSCSI disk volume located on an iSCSI-based Storage Area Network (SAN). NOTE: Release 20.6 is the last release in which Intel® Ethernet iSCSI Boot supports Intel® Ethernet Desktop Adapters and Network Connections. Starting with Release 20.7, Intel Ethernet iSCSI Boot no longer supports Intel Ethernet Desktop Adapters and Network Connections.
the interface (for example, setting an LAA on the interface, changing the primary adapter on a team, etc.) will cause the VNIC to lose connectivity. To prevent this loss of connectivity, Intel® PROSet will not allow you to change settings that change the MAC address.

NOTES:
- If Fibre Channel over Ethernet (FCoE)/Data Center Bridging (DCB) is present on the port, configuring the device in Virtual Machine Queue (VMQ) + DCB mode reduces the number of VMQ VPorts available for guest OSes.
NOTES:
- This applies only to virtual NICs created on a team or VLAN. Virtual NICs created on a physical adapter do not require these steps.
- Receive Load Balancing (RLB) is not supported in Hyper-V. Disable RLB when using Hyper-V.

1. Use Intel® PROSet to create the team or VLAN.
2. Open the Network Control Panel.
3. Open the team or VLAN.
4. On the General tab, uncheck all of the protocol bindings and click OK.
5. Create the virtual NIC.
Each Intel® Ethernet Adapter has a pool of virtual ports that are split between the various features, such as VMQ Offloading, SR-IOV, Data Center Bridging (DCB), and Fibre Channel over Ethernet (FCoE). Increasing the number of virtual ports used for one feature decreases the number available for other features. On devices that support it, enabling DCB reduces the total pool available for other features to 32. Enabling FCoE further reduces the total pool to 24.
SR-IOV architecture includes two function types:
- A Physical Function (PF) is a full-featured PCI Express function that can be discovered, managed, and configured like any other PCI Express device.
- A Virtual Function (VF) is similar to a PF but cannot be configured and only has the ability to transfer data in and out. The VF is assigned to a Virtual Machine.

NOTES:
- SR-IOV must be enabled in the BIOS.
- In Windows Server 2012, SR-IOV is not supported with teaming and VLANs.
iWARP (Internet Wide Area RDMA Protocol) Remote Direct Memory Access, or RDMA, allows a computer to access another computer's memory without interacting with either computer's operating system data buffers, thus increasing networking speed and throughput. Internet Wide Area RDMA Protocol (iWARP) is a protocol for implementing RDMA across Internet Protocol networks. Microsoft* Windows* provides two forms of RDMA: Network Direct (ND) and Network Direct Kernel (NDK).
1. Create a directory from which to install the iWARP files. For example, C:\Nano\iwarp.
2. Copy the following files into your new directory:
- \Disk\APPS\PROSETDX\Winx64\DRIVERS\i40wb.dll
- \Disk\APPS\PROSETDX\Winx64\DRIVERS\i40wbmsg.dll
- \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.cat
- \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.inf
- \Disk\APPS\PROSETDX\Winx64\DRIVERS\indv2.sys
3.
Installing the Adapter

Select the Correct Slot

One open PCI Express slot, x4, x8, or x16, depending on your adapter.
NOTE: Some systems have physical x8 PCI Express slots that actually support only lower speeds. Please check your system manual to identify the slot.

Insert the Adapter into the Computer
1. If your computer supports PCI Hot Plug, see your computer documentation for special installation instructions.
2. Turn off and unplug your computer. Then remove the cover.
Connect the RJ-45 Network Cable

Connect the RJ-45 network cable as shown:
Type of cabling to use:
- 10GBASE-T on Category 6, Category 6a, or Category 7 wiring, twisted 4-pair copper:
  - Maximum length is 55 meters for Category 6.
  - Maximum length is 100 meters for Category 6a.
  - Maximum length is 100 meters for Category 7.
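The copper run limits above can be captured in a small helper. The length table below is taken directly from the list above; the function name and structure are illustrative, not part of any Intel tool.

```python
# Maximum 10GBASE-T run lengths from the cabling list above (meters)
MAX_LENGTH_10GBASE_T = {"cat6": 55, "cat6a": 100, "cat7": 100}

def run_is_supported(category: str, length_m: float) -> bool:
    """Return True if a copper run of the given category and length is within spec."""
    limit = MAX_LENGTH_10GBASE_T.get(category.lower())
    if limit is None:
        raise ValueError(f"unknown cabling category: {category}")
    return length_m <= limit

assert run_is_supported("cat6", 55)
assert not run_is_supported("cat6", 60)   # Cat 6 tops out at 55 m for 10GBASE-T
assert run_is_supported("cat6a", 100)
```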
- The adapter must be connected to a compatible link partner, preferably set to auto-negotiate speed and duplex for Intel gigabit adapters.
- Intel Gigabit and 10 Gigabit Server Adapters using copper connections automatically accommodate either MDI or MDI-X connections. The auto-MDI-X feature of Intel gigabit copper adapters allows you to directly connect two adapters without using a cross-over cable.

Connect the Fiber Optic Network Cable

CAUTION: The fiber optic ports contain a Class 1 laser device.
- 10GBASE-SR/LC on 850 nanometer optical fiber:
  - Using 50 micron multimode, maximum length is 300 meters.
  - Using 62.5 micron multimode, maximum length is 33 meters.
- 1000BASE-SX/LC on 850 nanometer optical fiber:
  - Using 50 micron multimode, maximum length is 550 meters.
  - Using 62.5 micron multimode, maximum length is 275 meters.

Supported SFP+ and QSFP+ Modules

Adapters Based on the 710 Series of Controllers

For information on supported media, see the following link: http://www.intel.
IT
- Intel DUAL RATE 1G/10G SFP+ SR (bailed): AFBR-703SDDZIN1

LR Modules
- Intel DUAL RATE 1G/10G SFP+ LR (bailed): FTLX1471D3BCVIT
- Intel DUAL RATE 1G/10G SFP+ LR (bailed): AFCT-701SDZIN2
- Intel DUAL RATE 1G/10G SFP+ LR (bailed): AFCT-701SDDZIN1

QSFP Modules
- Intel TRIPLE RATE 1G/10G/40G QSFP+ SR (bailed) (40G not supported on 82599): E40GQSFPSR

The following is a list of 3rd party SFP+ modules that have received some testing. Not all modules are applicable to all devices.
- Avago 1000BASE-T SFP: ABCU-5710RZ
- HP 1000BASE-SX SFP: 453153-001

82598-Based Adapters

NOTES:
- Intel® Network Adapters that support removable optical modules only support their original module type (i.e., the Intel® 10 Gigabit SR Dual Port Express Module only supports SR optical modules). If you plug in a different type of module, the driver will not load.
- 82598-based adapters support all passive direct attach cables that comply with the SFF-8431 v4.1 and SFF-8472 v10.4 specifications.
Tyco 10m - Twin-ax cable 1-2032237-1 THIRD PARTY OPTIC MODULES AND CABLES REFERRED TO ABOVE ARE LISTED ONLY FOR THE PURPOSE OF HIGHLIGHTING THIRD PARTY SPECIFICATIONS AND POTENTIAL COMPATIBILITY, AND ARE NOT RECOMMENDATIONS OR ENDORSEMENT OR SPONSORSHIP OF ANY THIRD PARTY'S PRODUCT BY INTEL. INTEL IS NOT ENDORSING OR PROMOTING PRODUCTS MADE BY ANY THIRD PARTY AND THE THIRD PARTY REFERENCE IS PROVIDED ONLY TO SHARE INFORMATION REGARDING CERTAIN OPTIC MODULES AND CABLES WITH THE ABOVE SPECIFICATIONS.
- Change the primary adapter designator.
- Add a new adapter to an existing team and make the new adapter the primary adapter.
- Remove the primary adapter from the system and replace it with a different type of adapter.

NOTE: To replace an existing SLA-teamed adapter in a Hot Plug slot, first unplug the adapter cable. When the adapter is replaced, reconnect the cable.
Microsoft* Windows* Installation and Configuration

Installing Windows Drivers and Software

NOTE: To successfully install or uninstall the drivers or software, you must have administrative privileges on the computer completing installation.

Install the Drivers

NOTES:
- This will update the drivers for all supported Intel® network adapters in your system.
1. Identify which drivers to inject into the operating system. 2. Create a directory from which to install the drivers. For example, C:\Nano\Drivers 3. Copy the appropriate drivers for the operating system and hardware. For example, "copy D:\PROXGB\Winx64\NDIS65\*.* c:\Nano\Drivers /y" 4. If you are using the New-NanoServerImage module, use the above path for the -DriversPath parameter. For example, "New-NanoServerImage ...-DriversPath C:\Nano\Drivers" 5. If you are using DISM.
1. On the autorun, click Install Base Drivers and Software. NOTE: You can also run setup64.exe from the files downloaded from Customer Support. 2. Proceed with the installation wizard until the Custom Setup page appears. 3. Select the features to install. 4. Follow the instructions to complete the installation. If Intel PROSet for Windows Device Manager was installed without ANS support, you can install support by clicking Install Base Drivers and Software on the autorun, or running setup64.
Parameter: Definition
DMIX: "1", install Intel PROSet feature. The DMIX property requires BD=1.
NOTE: If DMIX=0, ANS will not be installed. If DMIX=0 and Intel PROSet, ANS, and FCoE are already installed, Intel PROSet, ANS, and FCoE will be uninstalled.
FCOE: Fibre Channel over Ethernet. "0", do not install FCoE (default). If FCoE is already installed, it will be uninstalled. "1", install FCoE. The FCoE property requires DMIX=1.
Parameter: Definition
/l[i|w|e|a]: Log file option for PROSet installation. Log switches:
- i: log status messages.
- w: log non-fatal warnings.
- e: log error messages.
- a: log the start of all actions.
/uninstall: Uninstall Intel PROSet and drivers.
/x

NOTES:
- You must include a space between parameters.
- If you specify a path for the log file, the path must exist. If you do not specify a complete path, the install log will be created in the current directory.
1. How to install the base driver: D:\DxSetup.exe DMIX=0 ANS=0 2. How to install the base driver using the logging option: D:\DxSetup.exe /l C:\installBD.log DMIX=0 ANS=0 3. How to install Intel PROSet and ANS silently: D:\DxSetup.exe DMIX=1 ANS=1 /qn 4. How to install Intel PROSet without ANS silently: D:\DxSetup.exe DMIX=1 ANS=0 /qn 5. How to install components but deselect ANS: D:\DxSetup.exe DMIX=1 ANS=0 /qn /liew C:\install.log The /liew log option provides a log file for the Intel PROSet installation.
Option: Description
SetupBD: Installs and/or updates the driver(s) and displays the GUI.
SetupBD /s: Installs and/or updates the driver(s) silently.
SetupBD /s /r: Installs and/or updates the driver(s) silently and forces a reboot.
SetupBD /s /r /nr: Installs and/or updates the driver(s) silently and forces a reboot (/nr is ignored).

Other Information
You can use the /r and /nr switches only with a silent install (i.e., with the "/s" option).
- Intel® 82552 10/100 Network Connection
- Intel® 82567V-3 Gigabit Network Connection
- Intel® X552 10G Ethernet devices
- Intel® X553 10G Ethernet devices
- Any platform with a System on a Chip (SoC) processor that includes either a server controller (designated by an initial X, such as X552) or both a server and client controller (designated by an initial I, such as I218)
- Devices based on the Intel® Ethernet Controller X722

Link Speed tab

The Link Speed tab allows you to change the adapter's
NOTES:
- Although some adapter property sheets (driver property settings) list 10 Mbps and 100 Mbps in full or half duplex as options, using those settings is not recommended.
- Only experienced network administrators should force speed and duplex manually.
- You cannot change the speed or duplex of Intel adapters that use fiber cabling.

Intel 10 Gigabit adapters that support 1 gigabit speed allow you to configure the speed setting.
Default: Disabled
Range:
- Enabled
- Disabled

Direct Memory Access (DMA) Coalescing

DMA (Direct Memory Access) allows the network device to move packet data directly to the system's memory, reducing CPU utilization. However, the frequency and random intervals at which packets arrive do not allow the system to enter a lower power state. DMA Coalescing allows the NIC to collect packets before it initiates a DMA event.
- RX & TX Enabled

Gigabit Master Slave Mode

Determines whether the adapter or the link partner is designated as the master; the other device is designated as the slave. By default, the IEEE 802.3ab specification defines how conflicts are handled. Multi-port devices such as switches have higher priority than single-port devices and are assigned as the master. If both devices are multi-port devices, the one with higher seed bits becomes the master. This default setting is called "Hardware Default.
- Medium
- Low
- Minimal
- Off

IPv4 Checksum Offload

This allows the adapter to compute the IPv4 checksum of incoming and outgoing packets. This feature enhances IPv4 receive and transmit performance and reduces CPU utilization. With offloading off, the operating system verifies the IPv4 checksum. With offloading on, the adapter completes the verification for the operating system.
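The computation the adapter takes over is the standard RFC 791 ones'-complement checksum over the IPv4 header. A minimal sketch of what is verified, independent of any adapter:

```python
def ipv4_checksum(header: bytes) -> int:
    """Ones'-complement sum of 16-bit words, per RFC 791.

    The checksum field itself must be zeroed before computing.
    """
    if len(header) % 2:
        header += b"\x00"
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                      # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# 20-byte sample header with the checksum field (bytes 10-11) zeroed
header = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
assert ipv4_checksum(header) == 0xB861
```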
Default: Disabled
Range: Disabled (1514), 4088, or 9014 bytes. (Set the switch 4 bytes higher for CRC, plus 4 bytes if using VLANs.)

NOTES:
- Jumbo Packets are supported at 10 Gbps and 1 Gbps only. Using Jumbo Packets at 10 or 100 Mbps may result in poor performance or loss of link.
- End-to-end hardware must support this capability; otherwise, packets will be dropped.
- Intel adapters that support Jumbo Packets have a frame size limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes.
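The relationship between the MTU and the on-wire frame the switch must pass is simple arithmetic; the header, CRC, and VLAN tag sizes below are standard Ethernet values, and the sketch reproduces the 9238/9216 pair from the note above:

```python
ETH_HEADER = 14   # destination MAC + source MAC + EtherType
ETH_FCS = 4       # CRC appended to every frame
VLAN_TAG = 4      # extra 802.1Q tag, when present

def on_wire_frame_size(mtu: int, vlan: bool = False) -> int:
    """Frame size a switch must pass for a given adapter MTU."""
    return mtu + ETH_HEADER + ETH_FCS + (VLAN_TAG if vlan else 0)

# The note above: a 9216-byte MTU corresponds to a 9238-byte frame limit
assert on_wire_frame_size(9216, vlan=True) == 9238
# Standard 1500-byte MTU gives the familiar 1518-byte maximum frame
assert on_wire_frame_size(1500) == 1518
```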
NOTE: In a team, Intel PROSet uses either:
- The primary adapter's permanent MAC address if the team does not have an LAA configured, or
- The team's LAA if the team has an LAA configured.

Intel PROSet does not use an adapter's LAA if the adapter is the primary adapter in a team and the team has an LAA.

Log Link State Event

This setting is used to enable/disable the logging of link state changes.
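The address-selection rule in the note reduces to a one-line function. This is an illustrative sketch of the rule, not PROSet's actual implementation:

```python
from typing import Optional

def team_mac(primary_permanent_mac: str, team_laa: Optional[str]) -> str:
    """Pick the address a team presents, per the note above:
    the team's LAA when one is configured, otherwise the primary
    adapter's permanent MAC address."""
    return team_laa if team_laa is not None else primary_permanent_mac

assert team_mac("00:1B:21:AA:BB:CC", None) == "00:1B:21:AA:BB:CC"
assert team_mac("00:1B:21:AA:BB:CC", "02:00:00:00:00:01") == "02:00:00:00:00:01"
```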
Network Virtualization using Generic Routing Encapsulation (NVGRE)

Network Virtualization using Generic Routing Encapsulation (NVGRE) improves the efficiency of routing network traffic within a virtualized or cloud environment. Some Intel® Ethernet Network devices perform NVGRE processing, offloading it from the operating system and reducing CPU utilization.
Range:
- Priority & VLAN Disabled
- Priority Enabled
- VLAN Enabled
- Priority & VLAN Enabled

Quality of Service

Quality of Service (QoS) allows the adapter to send and receive IEEE 802.3ac tagged frames. 802.3ac tagged frames include 802.1p priority-tagged frames and 802.1Q VLAN-tagged frames. To implement QoS, the adapter must be connected to a switch that supports and is configured for QoS.
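An 802.1Q tag is four bytes: the 0x8100 TPID followed by a 16-bit TCI holding the 3-bit 802.1p priority, a drop-eligible bit, and the 12-bit VLAN ID. A sketch of how such a tag is laid out:

```python
import struct

TPID_8021Q = 0x8100  # Tag Protocol Identifier for 802.1Q

def build_vlan_tag(priority: int, vlan_id: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: 3-bit 802.1p priority,
    1-bit drop-eligible indicator, 12-bit VLAN ID."""
    if not 0 <= priority <= 7:
        raise ValueError("802.1p priority is 3 bits (0-7)")
    if not 0 <= vlan_id <= 0xFFF:
        raise ValueError("VLAN ID is 12 bits (0-4095)")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID_8021Q, tci)

tag = build_vlan_tag(priority=5, vlan_id=100)
assert tag == bytes.fromhex("8100a064")
```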
Receive Side Scaling

When Receive Side Scaling (RSS) is enabled, all of the receive data processing for a particular TCP connection is shared across multiple processors or processor cores. Without RSS, all of the processing is performed by a single processor, resulting in less efficient system cache utilization. RSS can be enabled for a LAN or for FCoE. In the first case, it is called "LAN RSS"; in the second, "FCoE RSS".

LAN RSS

LAN RSS applies to a particular TCP connection.
- 8 and 16 queues are supported on Intel® 82598-based and 82599-based adapters.

NOTES:
- The 8 and 16 queue options are only available when Intel PROSet for Windows Device Manager is installed. If PROSet is not installed, only 4 queues are available.
- Using 8 or more queues requires the system to reboot.

NOTE: Not all settings are available on all adapters.

LAN RSS and Teaming
- If RSS is not enabled for all adapters in a team, RSS will be disabled for the team.
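Conceptually, RSS hashes each connection's address/port tuple and uses the result to pick a receive queue, so every packet of one connection lands on the same queue and processor. The sketch below uses Python's built-in hash purely as a stand-in for the adapter's actual Toeplitz hash:

```python
# Illustrative only: real adapters compute a Toeplitz hash over the
# TCP/IP tuple; a stand-in hash shows how flows map onto RSS queues.
def rss_queue(src_ip: str, src_port: int, dst_ip: str, dst_port: int,
              num_queues: int = 4) -> int:
    flow = (src_ip, src_port, dst_ip, dst_port)
    return hash(flow) % num_queues

# Every packet of one TCP connection lands on the same queue/CPU...
q1 = rss_queue("192.168.0.2", 49152, "192.168.0.10", 80)
assert q1 == rss_queue("192.168.0.2", 49152, "192.168.0.10", 80)
# ...and the result is always a valid queue index
assert 0 <= q1 < 4
```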
port platform configurations. Since all ports share the same default installation directives (the .inf file, etc.), the FCoE queues for every port will be associated with the same set of NUMA CPUs which may result in CPU contention. The software exporting these tuning options defines a NUMA Node to be equivalent to an individual processor (socket). Platform ACPI information presented by the BIOS to the operating system helps define the relation of PCI devices to individual processors.
- FCoE NUMA Node Count = 1
- FCoE Starting NUMA Node = 2
- FCoE Starting Core Offset = 0

This example highlights the fact that platform architectures can vary in the number of PCI buses and where they are attached. The figures below show two simplified platform architectures. The first is the older common FSB-style architecture, in which multiple CPUs share access to a single MCH and/or ESB that provides PCI bus and memory connectivity.
Determining Active Queue Location The user of these performance options will want to determine the affinity of FCoE queues to CPUs in order to verify their actual effect on queue allocation. This is easily done by using a small packet workload and an I/O application such as IoMeter. IoMeter monitors the CPU utilization of each CPU using the built-in performance monitor provided by the operating system. The CPUs supporting the queue activity should stand out.
Range:
- Disabled
- RX Enabled
- TX Enabled
- RX & TX Enabled

TCP/IP Offloading Options

Thermal Monitoring

Adapters and network controllers based on the Intel® Ethernet Controller I350 (and later controllers) can display temperature data and automatically reduce the link speed if the controller temperature gets too hot.
NOTE: This feature is enabled and configured by the equipment manufacturer. It is not available on all adapters and network controllers. There are no user-configurable settings.
With offloading off, the operating system verifies the UDP checksum. With offloading on, the adapter completes the verification for the operating system.
Default: RX & TX Enabled
Range:
- Disabled
- RX Enabled
- TX Enabled
- RX & TX Enabled

Wait for Link

Determines whether the driver waits for auto-negotiation to be successful before reporting the link state. If this feature is off, the driver does not wait for auto-negotiation. If the feature is on, the driver does wait for auto-negotiation.
Virtual LANs

Overview

NOTES:
- You must install the latest Microsoft* Windows* 10 updates before you can create Intel ANS Teams or VLANs on Windows 10 systems. Any Intel ANS Teams or VLANs created with a previous software/driver release on a Windows 10 system will be corrupted and cannot be upgraded. The installer will remove these existing teams and VLANs.
- If you are running Windows 10 Anniversary edition (RS1), you will need to install Intel LAN software v22.1 or newer.
Other Considerations
- Configuring SR-IOV for improved network security: In a virtualized environment, on Intel® Server Adapters that support SR-IOV, the virtual function (VF) may be subject to malicious behavior. Software-generated frames are not expected and can throttle traffic between the host and the virtual switch, reducing performance. To resolve this issue, configure all SR-IOV enabled ports for VLAN tagging.
NOTES:
- The VLAN ID keyword is supported. The VLAN ID must match the VLAN ID configured on the switch.
- Adapters with VLANs must be connected to network devices that support IEEE 802.1Q.
- If you change a setting under the Advanced tab for one VLAN, it changes the settings for all VLANs using that port.
- In most environments, a maximum of 64 VLANs per network port or team are supported by Intel PROSet.
- ANS VLANs are not supported on adapters and teams that have VMQ enabled.
- To configure teams in Linux, use Channel Bonding, available in supported Linux kernels. For more information, see the channel bonding documentation within the kernel source.
- Not all team types are available on all operating systems.
- Be sure to use the latest available drivers on all adapters.
- Not all Intel devices support Intel ANS or Intel PROSet. Intel adapters that do not support Intel ANS or Intel PROSet may still be included in a team.
8. Click the checkbox of any adapter you want to include in the team, then click Next. 9. Select a teaming mode, then click Next. 10. Click Finish. The Team Properties window appears, showing team properties and settings. Once a team has been created, it appears in the Network Adapters category in the Computer Management window as a virtual adapter. The team name also precedes the adapter name of any adapter that is a member of the team.
Teaming and VLAN Considerations When Replacing Adapters

After installing an adapter in a specific slot, Windows treats any other adapter of the same type as a new adapter. Also, if you remove the installed adapter and insert it into a different slot, Windows recognizes it as a new adapter. Make sure that you follow the instructions below carefully.
1. Open Intel PROSet.
2. If the adapter is part of a team, remove the adapter from the team.
3. Shut down the server and unplug the power cable.
4.
(Table flattened during extraction; it lists, per operating system — including Linux (Xen or KVM) and VMware ESXi — support for ANS Teams and VLANs, ANS VLANs, and LBFO.)

Supported Adapters

Teaming options are supported on Intel server adapters. Selected adapters from other manufacturers are also supported. If you are using a Windows-based computer, adapters that appear in Intel PROSet may be included in a team.
NOTE: In order to use adapter teaming, you must have at least one Intel server adapter in your system.
- Virtual Machine Load Balancing (VMLB) provides transmit and receive traffic load balancing across Virtual Machines bound to the team interface, as well as fault tolerance in the event of switch port, cable, or adapter failure. This teaming type works with any switch.
- Static Link Aggregation (SLA) provides increased transmission and reception throughput in a team of two to eight adapters.
Primary and Secondary Adapters

Teaming modes that do not require a switch with the same capabilities (AFT, SFT, ALB (with RLB)) use a primary adapter. In all of these modes except RLB, the primary is the only adapter that receives traffic. RLB is enabled by default on an ALB team. If the primary adapter fails, another adapter will take over its duties. If you are using more than two adapters and you want a specific adapter to take over if the primary fails, you must specify a secondary adapter.
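The failover order described above (primary first, then the designated secondary, then any remaining team member) can be sketched as a simple selection function; the names and structure here are illustrative, not the driver's actual logic:

```python
def active_adapter(primary_up: bool, secondary_up: bool,
                   others_up: list) -> str:
    """Sketch of the failover order described above: primary first,
    then the designated secondary, then any remaining team member."""
    if primary_up:
        return "primary"
    if secondary_up:
        return "secondary"
    for name, up in others_up:   # (adapter name, link status) pairs
        if up:
            return name
    return "none"  # the whole team has lost link

assert active_adapter(True, True, []) == "primary"
assert active_adapter(False, True, []) == "secondary"
assert active_adapter(False, False, [("member3", True)]) == "member3"
```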
AFT is the default mode when a team is created. This mode does not provide load balancing.

NOTES:
- AFT teaming requires that the switch not be set up for teaming and that spanning tree protocol is turned off for the switch port connected to the NIC or LOM on the server.
- All members of an AFT team must be connected to the same subnet.

Switch Fault Tolerance (SFT)

Switch Fault Tolerance (SFT) supports only two NICs in a team connected to two different switches.
Virtual Machine Load Balancing Virtual Machine Load Balancing (VMLB) provides transmit and receive traffic load balancing across Virtual Machines bound to the team interface, as well as fault tolerance in the event of switch port, cable, or adapter failure. The driver analyzes the transmit and receive load on each member adapter and balances the traffic across member adapters. In a VMLB team, each Virtual Machine is associated with one team member for its TX and RX traffic.
NOTES:
- IEEE 802.3ad teaming requires that the switch be set up for IEEE 802.3ad (link aggregation) teaming and that spanning tree protocol is turned off.
- Once you choose an aggregator, it remains in force until all adapters in that aggregation team lose link.
- In some switches, copper and fiber adapters cannot belong to the same aggregator in an IEEE 802.3ad configuration.
Removing Phantom Teams and Phantom VLANs If you physically remove all adapters that are part of a team or VLAN from the system without removing them via the Device Manager first, a phantom team or phantom VLAN will appear in Device Manager. There are two methods to remove the phantom team or phantom VLAN. Removing the Phantom Team or Phantom VLAN through the Device Manager Follow these instructions to remove a phantom team or phantom VLAN from the Device Manager: 1.
Range: The range varies with the operating system and adapter.

Ultra Low Power Mode When Cable is Disconnected

Enabling Ultra Low Power (ULP) mode significantly reduces power consumption when the network cable is disconnected from the device.
NOTE: If you experience link issues when two ULP-capable devices are connected back to back, disable ULP mode on one of the devices.
WoL Supported Devices

The following adapters support WoL only on Port A:
- Intel® Ethernet Server Adapter I350-T2
- Intel® Ethernet Server Adapter I350-T4
- Intel® Ethernet Server Adapter I340-T2
- Intel® Ethernet Server Adapter I340-T4
- Intel® Ethernet Server Adapter I340-F4
- Intel® Gigabit ET2 Quad Port Server Adapter
- Intel® PRO/1000 PF Quad Port Server Adapter
- Intel® PRO/1000 PT Quad Port LP Server Adapter
- Intel® PRO/1000 PT Quad Port Server Adapter
- Intel® PRO/1000 PT Dual Po
Configuring with IntelNetCmdlets Module for Windows PowerShell* The IntelNetCmdlets module for Windows PowerShell contains several cmdlets that allow you to configure and manage the Intel® Ethernet Adapters and devices present in your system. For a complete list of these cmdlets and their descriptions, type get-help IntelNetCmdlets at the Windows PowerShell prompt. For detailed usage information for each cmdlet, type get-help at the Windows PowerShell prompt.
Saving and Restoring an Adapter's Configuration Settings The Save and Restore Command Line Tool allows you to copy the current adapter and team settings into a standalone file (such as on a USB drive) as a backup measure. In the event of a hard drive failure, you can reinstate most of your former settings. The system on which you restore network configuration settings must have the same configuration as the one on which the save was performed.
Examples Save Example To save the adapter settings to a file on a removable media device, do the following. 1. Open a Windows PowerShell Prompt. 2. Navigate to the directory where SaveRestore.ps1 is located (generally c:\Program Files\Intel\Wired Networking\DMIX). 3. Type the following: SaveRestore.ps1 –Action Save –ConfigPath e:\settings.txt Restore Example To restore the adapter settings from a file on removable media, do the following: 1. Open a Windows PowerShell Prompt. 2.
Intel Network Drivers for DOS The NDIS2 (DOS) driver is provided solely for the purpose of loading other operating systems -- for example, during RIS or unattended installations. It is not intended as a high-performance driver. You can find adapter drivers, PROTOCOL.INI files, and NET.CFG files in the PRO100\DOS or PRO1000\DOS directory in the download folder. For additional unattended install information, see the text files in the operating system subdirectories under the APPS\SETUP\PUSH directory.
DRIVERNAME This is the only parameter required for all configurations. This parameter is essentially an "instance ID". Each instance of the driver must create a unique instance name, both to satisfy DOS driver requirements, and to make it possible to find the parameters for the instance in the PROTOCOL.INI file. When the driver initializes, it tries to find previously loaded instances of itself. If none are found, the driver calls itself "E1000$", and looks for that name in the PROTOCOL.
Syntax: SLOT = [0x0..0x1FFF] or SLOT = [0..8191]
Examples: SLOT = 0x1C or SLOT = 28
Default: The driver will auto-configure if possible.
Normal Behavior: The driver uses the value of the parameter to decide which adapter to control.
Syntax: ADVERTISE = [1 | 2 | 4 | 8 | 0x20 | 0x2F], where 0x01 = 10 Half, 0x02 = 10 Full, 0x04 = 100 Half, 0x08 = 100 Full, 0x20 = 1000 Full, 0x2F = all rates.
Example: ADVERTISE = 1
Default: 0x2F (all rates are supported)
Normal Behavior: By default, all speed/duplex combinations are advertised.
Possible Errors: An error message is displayed if the value given is out of range.

FLOWCONTROL

This parameter, which refers to IEEE 802.
Syntax: UseLastSlot = [0 | any other value]
Example: USELASTSLOT = 1
Default: 0
Normal Behavior: 0 = disabled; any other value = enabled.
Possible Errors: None

TXLOOPCOUNT

This parameter controls the number of times the transmit routine loops while waiting for a free transmit buffer. This parameter can affect transmit performance.
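PROTOCOL.INI uses a standard INI layout, so a section such as [E1000$] holding the parameters described above can be inspected with an ordinary INI parser. The fragment below is hypothetical; the section and key names follow the parameter descriptions in this chapter:

```python
import configparser

# Hypothetical PROTOCOL.INI fragment; the section name matches the
# default DRIVERNAME, and the keys follow the parameters above.
PROTOCOL_INI = """
[E1000$]
DRIVERNAME = E1000$
SLOT = 0x1C
ADVERTISE = 0x08
USELASTSLOT = 0
"""

parser = configparser.ConfigParser()
parser.read_string(PROTOCOL_INI)
params = parser["E1000$"]

assert params["DRIVERNAME"] == "E1000$"
assert int(params["SLOT"], 16) == 28          # 0x1C and 28 name the same slot
assert int(params["ADVERTISE"], 16) == 0x08   # advertise 100 Mbps full duplex only
```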
Data Center Bridging (DCB) for Intel® Network Connections

Data Center Bridging provides a lossless data center transport layer for using LANs and SANs in a single unified fabric. Data Center Bridging includes the following capabilities:
- Priority-based flow control (PFC; IEEE 802.1Qbb)
- Enhanced transmission selection (ETS; IEEE 802.1Qaz)
- Congestion notification (CN)
- Extensions to the Link Layer Discovery Protocol standard (IEEE 802.
l One of the features is not supported by the switch.
l The switch is not advertising the feature.
l The switch or host has disabled the feature (this would be an advanced setting for the host).
l Disable/enable DCB
l Troubleshooting information

Hyper-V (DCB and VMQ)
NOTE: Configuring a device in VMQ + DCB mode reduces the number of VMQs available for guest OSes.

DCB for Linux
DCB is supported on RHEL6 or later and SLES11 SP1 or later. See your operating system documentation for specifics.
for that adapter. These error messages notify the administrator of configuration issues that need to be addressed, but they do not affect the tagging or flow of iSCSI traffic for that team unless the message explicitly states that the TC Filter has been removed.

Linux Configuration
Virtually all Open Source distributions include support for an Open-iSCSI Software Initiator, and Intel® Ethernet adapters support it.
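With Open-iSCSI, target discovery and login are typically done with the iscsiadm utility. The commands below are a sketch only: the portal address and target IQN are placeholder values, and package names and service setup vary by distribution, so consult your distribution's documentation.

```
# Discover targets advertised by a portal (placeholder address).
iscsiadm -m discovery -t sendtargets -p 192.168.1.10:3260

# Log in to a discovered target (placeholder IQN).
iscsiadm -m node -T iqn.2001-04.com.example:storage.lun1 \
         -p 192.168.1.10:3260 --login
```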
Remote Boot Remote Boot allows you to boot a system using only an Ethernet adapter. You connect to a server that contains an operating system image and use that to boot your local system. Flash Images "Flash" is a generic term for nonvolatile RAM (NVRAM), firmware, and option ROM (OROM). Depending on the device, it can be on the NIC or on the system board. Updating the Flash in Microsoft Windows Intel® PROSet for Windows* Device Manager can update the flash on an Intel Ethernet network adapter.
Enable Remote Boot If you have an Intel Desktop Adapter installed in your client computer, the flash ROM device is already available in your adapter, and no further installation steps are necessary. For Intel Server Adapters, the flash ROM can be enabled using the BootUtil utility. For example, from the command line type:

BOOTUTIL -E
BOOTUTIL -NIC=1 -FLASHENABLE

The first line enumerates the ports available in your system. Choose a port, then type the second line, selecting the port you wish to enable.
A network boot option will appear in the boot options menu when the UEFI PXE network stack and Intel UEFI network driver have been loaded. Selecting this boot option will initiate a PXE network boot. Configuring UEFI Network Stack for TCP/UDP/MTFTP An IP-based network stack is available to applications requiring IP-based network protocols such as TCP, UDP, or MTFTP.
The following speed and duplex configurations can be selected:
l Autonegotiate (recommended)
l 100 Mbps, full duplex
l 100 Mbps, half duplex
l 10 Mbps, full duplex
l 10 Mbps, half duplex

The speed and duplex setting selected must match the speed and duplex setting of the connecting network port. A speed and duplex mismatch between ports will result in dropped packets and poor network performance. It is recommended that all ports on a network be set to autonegotiate.
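The matching rule above can be stated precisely: a link works cleanly only when both ends autonegotiate, or both are forced to identical speed and duplex settings. The Python sketch below is illustrative only (it is not part of any Intel tool or driver API) and simply flags a mismatched pair of port settings:

```python
def ports_match(local, partner):
    """Both ends should autonegotiate, or be forced to identical settings.

    Each setting is either the string "auto" or a (speed_mbps, duplex)
    tuple, e.g. (100, "full"). Illustrative check only, not a driver API.
    """
    if local == "auto" and partner == "auto":
        return True              # recommended configuration
    return local == partner      # forced settings must be identical

print(ports_match("auto", "auto"))                # True
print(ports_match((100, "full"), (100, "half")))  # mismatch: dropped packets
```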
The Intel Boot Agent supports PXE in pre-boot, Microsoft Windows*, and DOS environments. In each of these environments, a single user interface allows you to configure PXE protocols on Intel® Ethernet Adapters. Configuring the Intel® Boot Agent in a Microsoft Windows Environment If you use the Windows operating system on your client computer, you can use Intel® PROSet for Windows* Device Manager to configure and update the Intel Boot Agent software. Intel PROSet is available through the device manager.
The configuration setup menu shows a list of configuration settings on the left and their corresponding values on the right. Key descriptions near the bottom of the menu indicate how to change values for the configuration settings. For each selected setting, a brief "mini-Help" description of its function appears just above the key descriptions. 1. Highlight the setting you need to change by using the arrow keys. 2.
the screen. This information can be helpful during interaction with Intel Customer Support personnel or your IT team members. For more information about how to interpret the information displayed, refer to Diagnostics Information for Pre-boot PXE Environments. Intel Boot Agent Target/Server Setup Overview For the Intel® Boot Agent software to perform its intended job, there must be a server set up on the same network as the client computer.
Intel Boot Agent cannot continue. the problem.

PXE-E01: PCI Vendor and Device IDs do not match!
  Image vendor and device ID do not match those located on the card. Make sure the correct flash image is installed on the adapter.

PXE-E04: Error reading PCI configuration space. The Intel Boot Agent cannot continue.
  PCI configuration space could not be read. The machine is probably not PCI compliant. The Intel Boot Agent was unable to read one or more of the adapter's PCI configuration registers.
error.

PXE-E20: BIOS extended memory copy error. AH == xx
  An error occurred while trying to copy the image into extended memory. xx is the BIOS failure code.

PXE-E51: No DHCP or BOOTP offers received.
  The Intel Boot Agent did not receive any DHCP or BOOTP responses to its initial request. Make sure that your DHCP server (and/or proxyDHCP server, if one is in use) is properly configured and has sufficient IP addresses available for lease.
PXE-EC8: !PXE structure was not found in UNDI driver code segment.
  The Intel Boot Agent could not locate the needed !PXE structure resource. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.

PXE-EC9: PXENV+ structure was not found in UNDI driver code segment.
  The Intel Boot Agent could not locate the needed PXENV+ structure. An incorrect flash image is installed or the image has become corrupted. Try to update the flash image.
Cannot change boot order If you are accustomed to redefining your computer's boot order using the motherboard BIOS setup program, the default settings of the Intel Boot Agent setup program can override that setup. To change the boot sequence, you must first override the Intel Boot Agent setup program defaults. A configuration setup menu appears allowing you to set configuration values for the Intel Boot Agent.
Diagnostics information may include the following items:

PWA Number: The Printed Wire Assembly number identifies the adapter's model and version.
MAC Address: The unique Ethernet address assigned to the device.
Memory: The memory address assigned by the BIOS for memory-mapped adapter access.
I/O: The I/O port address assigned by the BIOS for I/O-mapped adapter access.
IRQ: The hardware interrupt assigned by the system BIOS.
Flags: A set of miscellaneous data either read from the adapter EEPROM or calculated by the Boot Agent initialization code. This information varies from one adapter to the next and is intended only for use by Intel customer support.

iSCSI Boot Configuration
iSCSI Initiator Setup
Configuring Intel® Ethernet iSCSI Boot on a Microsoft* Windows* Client Initiator Requirements
1. Make sure the iSCSI initiator system starts the iSCSI Boot firmware.
NOTE: When booting an operating system from a local disk, Intel® Ethernet iSCSI Boot should be disabled for all network ports. Intel® Ethernet iSCSI Boot Port Selection Menu The first screen of the Intel® iSCSI Boot Setup Menu displays a list of Intel® iSCSI Boot-capable adapters. For each adapter port the associated PCI device ID, PCI bus/device/function location, and a field indicating Intel® Ethernet iSCSI Boot status is displayed.
Intel® Ethernet iSCSI Boot Port Specific Setup Menu The port-specific iSCSI setup menu has four options:
l Intel® iSCSI Boot Configuration - Selecting this option takes you to the iSCSI Boot Configuration Setup Menu. The iSCSI Boot Configuration Menu is described in detail in the section below and allows you to configure the iSCSI parameters for the selected network port.
l CHAP Configuration - Selecting this option takes you to the CHAP configuration screen.
Listed below are the options in the Intel® iSCSI Boot Configuration Menu:
l Use Dynamic IP Configuration (DHCP) - Selecting this checkbox causes iSCSI Boot to attempt to get the client IP address, subnet mask, and gateway IP address from a DHCP server. If this checkbox is enabled, these fields are not visible.
l Initiator Name - Enter the iSCSI initiator name to be used by Intel® iSCSI Boot when connecting to an iSCSI target.
l Target IP - Enter the IP address of the iSCSI target in this field. This option is visible if DHCP for the iSCSI target is not enabled.
l Target Port - TCP port number.
l Boot LUN - Enter the LUN ID of the boot disk on the iSCSI target in this field. This option is visible if DHCP for the iSCSI target is not enabled.

iSCSI CHAP Configuration
Intel® iSCSI Boot supports Mutual CHAP MD5 authentication with an iSCSI target.
The CHAP Authentication feature of this product requires the following acknowledgments: This product includes cryptographic software written by Eric Young (eay@cryptsoft.com). This product includes software written by Tim Hudson (tjh@cryptsoft.com). This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. (http://www.openssl.org/).
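For background, the CHAP MD5 computation (RFC 1994) hashes the one-byte identifier, the shared secret, and the challenge, and the responder returns the resulting digest. The Python sketch below illustrates that underlying calculation only; it is not the iSCSI Boot implementation, and the secret and challenge values shown are made up for illustration.

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """MD5(identifier || secret || challenge), per RFC 1994."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Illustrative values only -- real secrets and challenges come from the
# initiator/target CHAP configuration.
resp = chap_response(1, b"sharedsecret", b"\x01\x02\x03\x04")
print(resp.hex())
```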
l The block size on the target must be 512 bytes.
l The following operating systems are supported:
  l VMware* ESX 5.0 or later
  l Red Hat* Enterprise Linux* 6.3 or later
  l SUSE* Enterprise Linux 11 SP2 or later
  l Microsoft* Windows Server* 2012 or later
l You may be able to access data only within the first 2 TB.

NOTE: The Crash Dump driver does not support target LUNs larger than 2 TB.
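The constraints above (512-byte target block size, crash-dump support only for LUNs up to 2 TB) can be checked programmatically. The Python sketch below is illustrative only, not an Intel utility, and it interprets 2 TB as 2 * 1024^4 bytes as an assumption:

```python
TWO_TB = 2 * 1024**4  # the crash-dump LUN size limit noted above (assumed binary TB)

def check_boot_lun(block_size: int, size_bytes: int):
    """Return a list of warnings for an iSCSI boot LUN configuration."""
    warnings = []
    if block_size != 512:
        warnings.append("target block size must be 512 bytes")
    if size_bytes > TWO_TB:
        warnings.append("crash dump unsupported: LUN larger than 2 TB")
    return warnings

print(check_boot_lun(512, 1 * 1024**4))   # no warnings
print(check_boot_lun(4096, 3 * 1024**4))  # two warnings
```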
Microsoft* Windows* Microsoft* Windows Server* natively supports OS installation to an iSCSI target without a local disk and also natively supports OS iSCSI boot. See Microsoft's installation instructions and Windows Deployment Services documentation for details. SUSE* Linux Enterprise Server For the easiest experience installing Linux onto an iSCSI target, you should use SLES10 or greater. SLES10 provides native support for iSCSI Booting and installing.
l After installing Intel Ethernet iSCSI Boot, the system will not boot to a local disk or network boot device.
l The system becomes unresponsive after Intel Ethernet iSCSI Boot displays the sign-on banner or after connecting to the iSCSI target.
l "Intel® iSCSI Remote Boot" does not show up as a boot device in the system BIOS boot device menu.
for Windows.

Error message displayed: "DHCP Server not found!"
  iSCSI was configured to retrieve an IP address from DHCP, but no DHCP server responded to the DHCP discovery request. This issue can have multiple causes.

Error message displayed: "PnP Check Structure is invalid!"
Error message displayed: "Invalid iSCSI connection information"
Error message displayed: "Unsupported SCSI disk block size!"
Error message displayed: "ERROR: Could not establish TCP/IP connection with iSCSI target system.
Error message displayed: "ERROR: Login request rejected by iSCSI target system."
  A login request was sent to the iSCSI target system, but the login request was rejected. Verify that the iSCSI initiator name, target name, LUN number, and CHAP authentication settings match the settings on the iSCSI target system.

When installing Linux to a NetApp Filer, after a successful target disk discovery, error messages may be seen similar to those listed below.
iSCSI Known Issues A device cannot be uninstalled if it is configured as an iSCSI primary or secondary port. Disabling the iSCSI primary port also disables the secondary port. To boot from the secondary port, change it to be the primary port. iSCSI Remote Boot: Connecting back-to-back to a target with a Broadcom LOM Connecting an iSCSI boot host to a target through a Broadcom LOM may occasionally cause the connection to fail. Use a switch between the host and target to avoid this.
Moving iSCSI adapter to a different slot: In a Windows* installation, if you move the iSCSI adapter to a PCI slot other than the one it occupied when the drivers and MS iSCSI Remote Boot Initiator were installed, a System Error may occur while the Windows Splash Screen is displayed. This issue goes away if you return the adapter to its original PCI slot. We recommend not moving the adapter used for iSCSI boot installation. This is a known OS issue.
Microsoft* Windows Server* 2008 Installation When Performing a WDS Installation If you perform a WDS installation and attempt to manually update drivers during the installation, the drivers load but the iSCSI Target LUN does not display in the installation location list. This is a known WDS limitation with no current fix. You must therefore either perform the installation from a DVD or USB media or inject the drivers on the WDS WinPE image.
Microsoft Windows iSCSI/DCB Known Issues iSCSI over DCB using Microsoft* Windows Server* 2012 iSCSI over DCB (priority tagging) is not possible on the port on which VMSwitch is created. This is by design in Microsoft* Windows Server* 2012. Automatic creation of iSCSI traffic filters for DCB is only supported on networks which make use of IPv4 addressing The iSCSI for Data Center Bridging (DCB) feature uses Quality of Service (QOS) traffic filters to tag outgoing packets with a priority.
New Installation on a Windows Server* system From the Intel downloaded media: Click the FCoE/DCB checkbox to install Intel® Ethernet FCoE Protocol Driver and DCB. The MSI Installer installs all FCoE and DCB components including Base Driver.
NOTES:
l From the Boot Options tab, the user will see the Flash Information button. Clicking the Flash Information button opens the Flash Information dialog.
l From the Flash Information dialog, clicking the Update Flash button allows Intel® iSCSI Remote Boot, Intel® Boot Agent (IBA), Intel® Ethernet FCoE Boot, EFI, and CLP to be written.
7. Follow the instructions for a New Installation on a Windows Server* system. This will install the networking drivers and configure the FCoE drivers to work with the networking drivers. Note that you cannot deselect the FCoE feature. You will be prompted to reboot at the end of the installation process. 8. Windows may prompt you to reboot once again after it returns to the desktop.
NOTES:
l Individually upgrading/downgrading the Intel® Ethernet FCoE driver will not work and may even cause a blue screen; the entire FCoE package must be the same version. Upgrade the entire FCoE package using the Intel® Network Connections installer only.
l If you uninstalled the Intel® Ethernet Virtual Storage Miniport Driver for FCoE component, find the same version that you uninstalled and re-install it, or uninstall and then re-install the entire FCoE package.
FCoE Boot Targets Configuration Menu FCoE Boot Targets Configuration: Discover Targets is highlighted by default. If the Discover VLAN value displayed is not what you want, enter the correct value. Highlight Discover Targets and then press Enter to show targets associated with the Discover VLAN value. Under Target WWPN, if you know the desired WWPN you can manually enter it or press Enter to display a list of previously discovered targets.
FCoE Target Selection Menu
Highlight the desired target from the list and press Enter. Manually fill in the LUN and Boot Order values. Valid Boot Order values are 0-4, where 0 means no boot order, i.e., ignore the target; a 0 value also indicates that this port should not be used to connect to the target. Boot Order values of 1-4 can each be assigned only once across all FCoE boot-enabled ports. The VLAN value is 0 by default. Running Discover Targets will display the discovered VLAN.
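The boot-order rules above (0 means ignore the target; values 1-4 may each be assigned at most once across all FCoE boot-enabled ports) can be sketched as a validation routine. The Python below is illustrative only, not part of the FCoE setup utility; the port names are made up:

```python
def validate_boot_orders(orders):
    """orders maps port name -> Boot Order value (0-4).

    0 means the target is ignored on that port; 1-4 must each be
    unique across all FCoE boot-enabled ports.
    """
    seen = set()
    for port, value in orders.items():
        if value not in range(5):
            return f"{port}: boot order must be 0-4"
        if value != 0:
            if value in seen:
                return f"{port}: boot order {value} already assigned"
            seen.add(value)
    return "ok"

print(validate_boot_orders({"port0": 1, "port1": 2, "port2": 0}))  # ok
print(validate_boot_orders({"port0": 1, "port1": 1}))              # duplicate
```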
Before beginning the configuration, update the adapter's UEFI FCoE Option ROM using the BootUtil tool and the latest BootIMG.FLB file, with the following command:

BOOTUTIL64E.EFI -up=efi+efcoe -nic=PORT -quiet

where PORT is the NIC adapter number (in the following example, nic=1).

NOTE: The UEFI FCoE driver must be loaded before you perform the following steps.

Accessing the FCoE Configuration Screen Boot the system into its BIOS and proceed as follows: 1.
1. From the FCoE Configuration menu, select Add an Attempt. All supported ports are displayed. 2. Select the desired port. The FCoE Boot Targets Configuration screen is displayed. 3. Select Discover Targets to automatically discover available targets (alternatively, you can manually enter the fields on the FCoE Boot Targets Configuration screen). The Select from Discovered Targets option displays a list of previously discovered targets. 4. Select Auto-Discovery.
5. Select the desired target from the list. The FCoE Boot Targets Configuration screen is displayed with completed fields for the selected target. 6. Press F10 (Save) to add this FCoE attempt. The FCoE Configuration screen is displayed with the newly added FCoE attempt listed.
Deleting an Existing FCoE Attempt 1. From the FCoE Configuration menu, select Delete Attempts. 2. Select one or more attempts to delete, as shown below (note that the example now shows three added attempts). 3. To delete the selected attempts, choose Commit Changes and Exit. To exit this screen without deleting the selected attempts, choose Discard Changes and Exit.
Changing the Order of FCoE Attempts 1. From the FCoE Configuration menu, select Change Attempt Order. 2. Press the Enter key to display the Change Attempt Order dialog, shown below. 3. Use the arrow keys to change the attempt order. When satisfied, press the Enter key to exit the dialog. The new attempt order is displayed. 4. To save the new attempt order, select Commit Changes and Exit. To exit without saving changes, select Discard Changes and Exit.
4. Use Load Driver to load the FCoE drivers. Browse to the location you chose previously and load the following two drivers in the specified order: 1. Intel(R) Ethernet Setup Driver for FCoE. 2. Intel(R) Ethernet Virtual Storage Miniport Driver for FCoE. Note: the FCoE drivers will block any other network traffic from the FCoE-supported ports until after Step 7. Do not attempt to install an NDIS miniport for any FCoE-supported ports until after Step 7. 5.
Red Hat Enterprise Linux For the easiest experience installing Linux onto an iSCSI target, you should use RHEL 6 or greater. RHEL 6 provides native support for iSCSI Booting and installing. This means that there are no additional steps outside of the installer that are necessary to install to an iSCSI target using an Intel Ethernet Server Adapter. Please refer to the RHEL 6 documentation for instructions on how to install to an iSCSI LUN.
When removing ALB teaming, all FCoE functions fail, all DMIX tabs are grayed out, and both adapter ports fail.

For ANS teaming to work with Microsoft Network Load Balancing (NLB) in unicast mode, the team's LAA must be set to the cluster node IP. For ALB mode, Receive Load Balancing must be disabled. For further configuration details, refer to http://support.microsoft.com/?id=278431. ANS teaming also works when NLB is in multicast mode.
Problem: The port does not appear in the list of virtualizable ports in the Hyper-V Virtual Network Manager.
Solutions:
l Restart the system after setting a port to a boot port and before adding the Hyper-V role.
l Disable, then re-enable the port in Device Manager after setting it to boot and before adding the Hyper-V role.
Troubleshooting
Common Problems and Solutions
Many network problems are simple and easy to fix. Review each of these before going further.
l Check for recent changes to hardware, software, or the network that may have disrupted communications.
l Check the driver software.
  l Disable (or unload), then re-enable (reload) the driver or adapter.
  l Check for conflicting settings. Disable advanced settings such as teaming or VLANs to see if that corrects the problem.
l Check your BIOS version and settings.
  l Use the latest appropriate BIOS for your computer.
  l Make sure the settings are appropriate for your computer.

The following troubleshooting table assumes that you have already reviewed the common problems and solutions.

Problem: Your computer cannot find the adapter / Diagnostics pass but the connection fails.
Solution: Make sure your adapter slots are compatible with the type of adapter you are using:
l PCI Express v1.0 (or newer)
l PCI-X v2.
Adapter". Make sure the proper (and latest) driver is loaded. Make sure that the link partner is configured to autonegotiate (or is forced to match the adapter). Verify that the switch is IEEE 802.3ad-compliant.

Problem: The link light is on, but communications are not properly established.
Solution: Make sure the proper (and latest) driver is loaded. Both the adapter and its link partner must be set to either autodetect or manually set to the same speed and duplex settings.
PCI-X / PCIe.

Problem: Multiple Adapters.
Solution: When configuring a multi-adapter environment, you must upgrade all Intel adapters in the computer to the latest software. If the computer has trouble detecting all adapters, consider the following:
l If you enable Wake on LAN* (WoL) on more than two adapters, the Wake on LAN feature may overdraw your system's auxiliary power supply, resulting in the inability to boot the system and other unpredictable problems.
l Connection Test: Verifies network connectivity by pinging the DHCP server, WINS server, and gateway.
l Cable Tests: Provide information about cable properties.
  NOTE: The Cable Test is not supported on all adapters; it is only available on adapters that support it.
l Hardware Tests: Determine whether the adapter is functioning properly.
  NOTE: Hardware tests will fail if the adapter is configured for iSCSI Boot.
DIAGS.EXE runs under MS-DOS* and later compatible operating systems. It will not run from a Windows* Command Prompt within any version of the Microsoft Windows operating system, or in any other non-MS-DOS operating system. This utility is designed to test hardware operation and to confirm the adapter's ability to communicate with another adapter on the same network. It is not a throughput measurement tool. DIAGS can test the adapter whether or not a responder is present.
Change Test Options The test setup screen allows you to configure and select the specific tests desired. Each option is toggled by moving the cursor with the arrow keys and pressing Enter to change the option. The number of tests is entered from the keyboard in the appropriate box. If there is a gap in the menu, that means the test is not supported by your adapter. By default, local diagnostics run automatically, while network diagnostics are disabled.
Indicator Lights The Intel Server and Desktop network adapters feature indicator lights on the adapter backplate that serve to indicate activity and the status of the adapter board. The following tables define the meaning for the possible states of the indicator lights for each adapter board.
Single Port QSFP+ Adapters
The Intel® Ethernet Converged Network Adapter XL710-Q1 has the following indicator lights:
l Green: Linked at 40 Gb
l Yellow: Linked at 1/10 Gb
l Blinking On/Off: Actively transmitting or receiving data
l Off: No link
Dual Port SFP/SFP+ Adapters
The Intel® Ethernet Converged Network Adapter X710-2 has the following indicator lights (LNK/ACT):
l Green: Linked at 10 Gb
l Yellow: Linked at 1 Gb
l Blinking On/Off: Actively transmitting or receiving data
l Off: No link

The Intel® 10 Gigabit AF DA Dual Port Server Adapter and Intel® Ethernet Server Adapter X520 series of adapters have the following indicator lights:
l On: Linked to the LAN.
l Off: Not linked to the LAN.
Quad Port SFP/SFP+ Adapters
The Intel® Ethernet Converged Network Adapter X710-4 has the following indicator lights (LNK/ACT):
l Green: Linked at 10 Gb
l Yellow: Linked at 1 Gb
l Blinking On/Off: Actively transmitting or receiving data
l Off: No link

The Intel® Ethernet Converged Network Adapter X520-4 has the following indicator lights:
l Green: Linked at 10 Gb
l Yellow: Linked at 1 Gb
l Blinking On/Off: Actively transmitting or receiving data
l Off: No link
Dual Port Copper Adapters
The Intel® Ethernet Converged Network Adapter X550-T2 has the following indicator lights:
Link:
l Green: Linked at 10 Gb.
l Yellow: Linked at 1 Gb.
l Off: Linked at 100 Mbps.
Activity:
l Blinking On/Off: Actively transmitting or receiving data.
l Off: No link.
The Intel® Ethernet Server Adapter X520-T2 has the following indicator lights:
Link:
l Green: Linked at 10 Gb.
l Yellow: Linked at 1 Gb.
l Off: No link.
Activity:
l Blinking On/Off: Actively transmitting or receiving data.
l Off: No link.
The Intel® Ethernet Server Adapter I350-T2, I340-T2, PRO/1000 P, PT Dual Port, and Gigabit ET Dual Port Server Adapters have the following indicator lights:
ACT/LNK:
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
10/100/1000:
l Off: 10 Mbps
l Green: 100 Mbps
l Yellow: 1000 Mbps
l Orange flashing: Identity. Use the "Identify Adapter" button in Intel PROSet to control blinking. See Intel PROSet Help for more information.
The PRO/100+ Dual Port Server adapter (with three LEDs per port: LNK, ACT, 100) has the following indicator lights:
LNK:
l On: The adapter and switch are receiving power; the cable connection between the switch and adapter is good.
l Off: The adapter and switch are not receiving power; the cable connection between the switch and adapter is faulty; or you have a driver configuration problem.
ACT:
l On or flashing: The adapter is sending or receiving network data.
Single Port Copper Adapters
The Intel® Ethernet Converged Network Adapter X550-T1 has the following indicator lights:
Link:
l Green: Linked at 10 Gb.
l Yellow: Linked at 1 Gb.
l Off: Linked at 100 Mbps.
Activity:
l Off: No link.
l Blinking On/Off: Actively transmitting or receiving data.
The Intel® Ethernet Converged Network Adapter X540-T1 has the following indicator lights:
Link:
l Off: No link.
l Green: Linked at 10 Gb.
l Yellow: Linked at 1 Gb.
Activity:
l Off: No link.
l Blinking On/Off: Actively transmitting or receiving data.

The Intel® 10 Gigabit AT Server Adapter has the following indicator lights (labels: ACT/LNK, 1Gig/10Gig, FAN FAIL):
l Green on: The adapter is connected to a valid link partner.
The Intel® PRO/1000 PT Server Adapter has the following indicator lights:
ACT/LNK:
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
10=OFF 100=GRN 1000=ORG:
l Off: 10 Mbps
l Green: 100 Mbps
l Orange: 1000 Mbps
l Orange flashing: Identity. Use the "Identify Adapter" button in Intel® PROSet to control blinking. See Intel PROSet Help for more information.
The Intel® Gigabit CT2, Gigabit CT, PRO/1000 T, and PRO/1000 MT Desktop Adapters have the following indicator lights (labels: ACT/LNK, 10/100/1000):
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
l Yellow flashing: Identity. Use the "Identify Adapter" button in Intel® PROSet to control blinking. See Intel PROSet Help for more information.
The Intel® PRO/1000 T Server Adapter has the following indicator lights (not labeled):
l Flashing: Identity. Use the "Identify Adapter" button in Intel PROSet to control blinking. See Intel PROSet Help for more information.
l On: The adapter is connected to a valid link partner.
l Off: No link.
l On: Data is being transmitted or received.
l Off: No data activity.
Quad Port Copper Adapters
The Intel® Ethernet Converged Network Adapter X710-T4 has the following indicator lights (labels: ACT, LNK):
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
The Intel® Ethernet Server Adapter I350-T4, I340-T4, Gigabit ET and PRO/1000 PT Quad Port LP Server Adapters have the following indicator lights:
ACT/LNK:
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
10/100/1000:
l Green: 100 Mbps
l Yellow: 1000 Mbps
l Orange flashing: Identity. Use the "Identify Adapter" button in Intel® PROSet to control blinking. See Intel PROSet Help for more information.
Dual Port Fiber Adapters
The Intel® 10 Gigabit XF SR Dual Port Server Adapter has the following indicator lights (ACT/LNK):
l On: The adapter is connected to a valid link partner. The adapter is actively passing traffic.
l Blinking: Identity. Use the "Identify Adapter" button in Intel PROSet to control blinking. See Intel PROSet Help for more information.
l Off: No link.
ACT/LNK:
l On: The adapter is connected to a valid link partner. The adapter is actively passing traffic.
l Blinking: Identity. Use the "Identify Adapter" button in Intel PROSet to control blinking. See Intel PROSet Help for more information.
l Off: No link.

The Intel® PRO/1000 MF and PF Server Adapters have the following indicator lights (ACT/LNK):
l On: The adapter is connected to a valid link partner. The adapter is actively passing traffic.
l Blinking: Identity.
The Intel® PRO/1000 XF Server Adapter has the following indicator lights:
l On: The adapter is connected to a valid link partner.
l Off: No link.
l On: Data is being transmitted or received.
l Off: No data activity.
l Flashing: Identity. Use the "Identify Adapter" button in Intel PROSet to control blinking. See Intel PROSet Help for more information.
Quad Port Fiber Adapters
The Intel® Ethernet Server Adapter I340-F4 has the following indicator lights (GRN=1G):
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
The Intel® PRO/1000 PF Quad Port Server Adapter has the following indicator lights (ACT/LNK):
l Green on: The adapter is connected to a valid link partner.
l Green flashing: Data activity.
l Off: No link.
Known Issues NOTE: iSCSI Known Issues and FCoE Known Issues are located in their own sections of this manual. Lost Data Packets caused by Frequent LLDP Packets on an Inactive Port When ports are teamed or bonded together in an active/passive configuration (for example, in a switch fault tolerance team, or a mode 1 bond), the inactive port may send out frequent LLDP packets, which results in lost data packets.
Virtual machine loses link on a Microsoft Windows Server 2012 R2 system On a Microsoft Windows Server 2012 R2 system with VMQ enabled, if you change the BaseRssProcessor setting, then install Microsoft Hyper-V and create one or more virtual machines, the virtual machines may lose link. Installing the April 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 (2919355) and hotfix 3031598 will resolve the issue. See http://support2.microsoft.com/kb/2919355 and http://support2.
A VLAN Created on an Intel Adapter Must be Removed Before a Multi-Vendor Team Can be Created. In order to create the team, the VLAN must first be removed. Receive Side Scaling value is blank Changing the Receive Side Scaling setting of an adapter in a team may cause the value for that setting to appear blank when you next check it. It may also appear blank for the other adapters in the team. The adapter may be unbound from the team in this situation. Disabling and enabling the team will resolve the issue.
Other Intel 10GbE Network Adapter Known Issues ETS Bandwidth Allocations Don't Match Settings When Jumbo Frames is set to 9K with a 10GbE adapter, a 90%/10% ETS traffic split will not actually be attained on any particular port, despite settings being made on the DCB switch. When ETS is set to a 90%/10% split, an actual observed split of 70%/30% is more likely.
Driver Buffer Overflow Fix The fix to resolve CVE-2016-8105, referenced in Intel SA-00069 https://securitycenter.intel.com/advisory.aspx?intelid=INTEL-SA-00069&languageid=en-fr, is included in this and future versions of the driver. Intel ANS VLANs adversely affect performance Intel ANS VLANs adversely affect the performance of X710 based devices. Use the networking features built into Microsoft Windows Server 2012, or other server management software, to assign VLANs.
Regulatory Compliance Statements

FCC Class A Products

- Intel® Ethernet Network Adapter XXV710
- Intel® Ethernet Network Adapter XXV710-1
- Intel® Ethernet Network Adapter XXV710-2
- Intel® Ethernet I/O Module XL710-Q1
- Intel® Ethernet I/O Module XL710-Q2
- Intel® Ethernet Server Adapter X550-T2 for OCP
- Intel® Ethernet Server Adapter X550-T1 for OCP
- Intel® Ethernet Server Bypass Adapter X540-T2
- Intel® Ethernet Converged Network Adapter X540-T2
- Intel® Ethernet Converged Network Ad
FCC Class B Products

- Intel® Ethernet Converged Network Adapter X710-2
- Intel® Ethernet Converged Network Adapter X710-4
- Intel® Ethernet Converged Network Adapter X710-T4
- Intel® Ethernet Converged Network Adapter XL710-Q1
- Intel® Ethernet Converged Network Adapter XL710-Q2
- Intel® Ethernet Converged Network Adapter X550-T1
- Intel® Ethernet Converged Network Adapter X550-T2
- Intel® Ethernet Server Adapter X520-1
- Intel® Ethernet Server Adapter X520-2
- Intel® Ethernet SFP+ LR Op
- Intel® PRO/1000 MF Dual Port Server Adapter
- Intel® PRO/1000 PF Server Adapter
- Intel® PRO/1000 PF Dual Port Server Adapter
- Intel® PRO/1000 PF Quad Port Server Adapter
- Intel® PRO/100 M Desktop Adapter
- Intel® PRO/100 S Desktop Adapter
- Intel® PRO/100 S Server Adapter
- Intel® PRO/100 S Dual Port Server Adapter

Safety Compliance

The following safety standards apply to all products listed above.
- CNS13438 (Class B)-2006 – Radiated & Conducted Emissions (Taiwan) (excluding optics)
- AS/NZS CISPR 22 – Radiated & Conducted Emissions (Australia/New Zealand)
- KN22; KN24 – Korean emissions and immunity
- NRRA No. 2012-13 (2012.06.28), NRRA Notice No. 2012-14 (2012.06.
VCCI Class A Statement

BSMI Class A Statement

KCC Notice Class A (Republic of Korea Only)

BSMI Class A Notice (Taiwan)

FCC Class B User Information

This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to Part 15 of the FCC rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation.
This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications. However, there is no guarantee that interference will not occur in a particular installation.
KCC Notice Class B (Republic of Korea Only)
EU WEEE Logo

Manufacturer Declaration

European Community Manufacturer Declaration

Intel Corporation declares that the equipment described in this document is in conformance with the requirements of the European Council Directives listed below:

- Low Voltage Directive 2006/95/EC
- EMC Directive 2004/108/EC
- RoHS Directive 2011/65/EU

These products follow the provisions of the European Directive 1999/5/EC.
This product is in compliance with European Directive 1999/5/EC.
China RoHS Declaration

Class 1 Laser Products

Server adapters listed above may contain laser devices for communication use. These devices are compliant with the requirements for Class 1 Laser Products and are safe in their intended use. In normal operation the output of these laser devices does not exceed the exposure limit of the eye and cannot cause harm.
Customer Support

Intel support is available on the web or by phone. Support offers the most up-to-date information about Intel products, including installation instructions, troubleshooting tips, and general product information.

Web and Internet Sites

Support: http://www.intel.com/support
Corporate Site for Network Products: http://www.intel.com/products/ethernet/overview.
Legal Disclaimers

INTEL SOFTWARE LICENSE AGREEMENT

IMPORTANT - READ BEFORE COPYING, INSTALLING OR USING. Do not copy, install, or use this software and any associated materials (collectively, the "Software") provided under this license agreement ("Agreement") until you have carefully read the following terms and conditions. By copying, installing, or otherwise using the Software, you agree to be bound by the terms of this Agreement.
OEM LICENSE: You may reproduce and distribute the Software only as an integral part of or incorporated in your product, as a standalone Software maintenance update for existing end users of your products, excluding any other standalone products, or as a component of a larger Software distribution, including but not limited to the distribution of an installation image or a Guest Virtual Machine image, subject to these conditions: 1.
Software from Intel, and may contain errors and other problems that could cause data loss, system failures, or other errors. The pre-release Software is provided to you "as-is" and Intel disclaims any warranty or liability to you for any damages that arise out of the use of the pre-release Software.
APPLICABLE LAWS. Claims arising under this Agreement shall be governed by the laws of the State of California, without regard to principles of conflict of laws. You agree that the terms of the United Nations Convention on Contracts for the Sale of Goods do not apply to this Agreement. You may not export the Software in violation of applicable export laws and regulations. Intel is not obligated under any other agreements unless they are in writing and signed by an authorized representative of Intel.
Returning a Defective Product

From North America: Before returning any adapter product, contact Intel Customer Support and obtain a Return Material Authorization (RMA) number by calling +1 916-377-7000. If the Customer Support Group verifies that the adapter product is defective, they will have the RMA department issue you an RMA number to place on the outer package of the adapter product. Intel cannot accept any product without an RMA number on the package.