Cisco SFS InfiniBand Host Drivers User Guide for Linux
Release 3.2.0
June 2007

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
CONTENTS

Preface vii
  Audience vii
  Organization vii
  Conventions viii
  Root and Non-root Conventions in Examples ix
  Related Documentation ix
  Obtaining Documentation, Obtaining Support, and Security Guidelines ix

CHAPTER 1  About Host Drivers 1-1
  Introduction 1-1
  Architecture 1-2
  Supported Protocols 1-3
    IPoIB 1-3
    SRP 1-3
    SDP 1-3
  Supported APIs 1-4
    MVAPICH MPI 1-4
    uDAPL 1-4
    Intel MPI 1-4
    HP MPI 1-4
  HCA Utilities and Diagnostics 1-4

CHAPTER 2  Installing Host Drivers 2-1
  Introduction 2-1
  Contents of ISO Image 2-2
  Installing Host Drivers from an ISO Image 2-2
  Uninstalling Host Drivers from an ISO Image 2-3

CHAPTER 3  IP over IB Protocol 3-1
  Introduction 3-1
  Manually Configuring IPoIB for Default IB Partition 3-2
  Subinterfaces 3-2
    Creating a Subinterface Associated with a Specific IB Partition 3-3
    Removing a Subinterface Associated with a Specific IB Partition 3-4
  Verifying IPoIB Functionality 3-5
  IPoIB Performance 3-6
  Sample Startup Configuration File 3-8
  IPoIB High Availability 3-8
    Merging Physical Ports 3-8
    Unmerging Physical Ports 3-9

CHAPTER 4  SCSI RDMA Protocol 4-1
  Introduction 4-1
  Configuring SRP 4-1
    Configuring ITLs when Using Fibre Channel Gateway 4-2
    Configuring SRP Host 4-6
  Verifying SRP 4-7

CHAPTER 5  Sockets Direct Protocol 5-1
  Introduction 5-1
  Configuring IPoIB Interfaces 5-1
  Converting Sockets-Based Application 5-2
  SDP Performance 5-4
  Netperf Server with IPoIB and SDP 5-6

CHAPTER 6  uDAPL 6-1
  Introduction 6-1
  uDAPL Test Performance 6-1
  Compiling uDAPL Programs 6-4

CHAPTER 7  MVAPICH MPI 7-1
  Introduction 7-1
  Initial Setup 7-2
  Configuring SSH 7-2
  Editing Environment Variables 7-5
    Setting Environment Variables in System-Wide Startup Files 7-6
    Editing Environment Variables in the Users Shell Startup Files 7-6
    Editing Environment Variables Manually 7-7
  MPI Bandwidth Test Performance 7-7
  MPI Latency Test Performance 7-8
  Intel MPI Benchmarks (IMB) Test Performance 7-9
  Compiling MPI Programs 7-12

CHAPTER 8  HCA Utilities and Diagnostics 8-1
  Introduction 8-1
  hca_self_test Utility 8-1
  tvflash Utility 8-3
  Diagnostics 8-5

APPENDIX A  Acronyms and Abbreviations A-1
Preface

This preface describes who should read the Cisco SFS InfiniBand Host Drivers User Guide for Linux, how it is organized, and its document conventions.
Chapter 8, "HCA Utilities and Diagnostics"  Describes the fundamental HCA utilities and diagnostics.
Appendix A, "Acronyms and Abbreviations"  Defines the acronyms and abbreviations that are used in this publication.

Conventions

This document uses the following conventions:

boldface font  Commands, command options, and keywords are in boldface. Bold text indicates Chassis Manager elements or text that you must enter as-is.
Notes use the following convention:

Note  Means reader take note. Notes contain helpful suggestions or references to material not covered in the manual.

Cautions use the following convention:

Caution  Means reader be careful. In this situation, you might do something that could result in equipment damage or loss of data.

Root and Non-root Conventions in Examples
Obtaining Documentation, Obtaining Support, and Security Guidelines
C H A P T E R 1

About Host Drivers

This chapter describes host drivers and includes the following sections:
• Introduction, page 1-1
• Architecture, page 1-2
• Supported Protocols, page 1-3
• Supported APIs, page 1-4
• HCA Utilities and Diagnostics, page 1-4

Note  For expansions of acronyms and abbreviations used in this publication, see Appendix A, "Acronyms and Abbreviations."
Note  See the "Root and Non-root Conventions in Examples" section on page ix for details about the significance of prompts used in the examples in this chapter.

Architecture

Figure 1-1 displays the software architecture of the protocols and APIs that HCAs support. The figure displays ULPs and APIs in relation to other IB software elements.
Supported Protocols

This section describes the supported protocols and includes the following topics:
• IPoIB
• SRP
• SDP

Protocol here refers to software in the networking layer in kernel space.

IPoIB

The IPoIB protocol passes IP traffic over the IB network. Configuring IPoIB requires similar steps to configuring IP on an Ethernet network. SDP relies on IPoIB to resolve IP addresses. (See the "SDP" section on page 1-3.)
Supported APIs

This section describes the supported APIs and includes the following topics:
• MVAPICH MPI
• uDAPL
• Intel MPI
• HP MPI

API refers to software in the networking layer in user space.

MVAPICH MPI

MPI is a standard library functionality in C, C++, and Fortran that can be used to implement a message-passing program. MPI allows the coordination of a program running as multiple processes in a distributed memory environment.
C H A P T E R 2

Installing Host Drivers

This chapter includes the following sections:
• Introduction, page 2-1
• Contents of ISO Image, page 2-2
• Installing Host Drivers from an ISO Image, page 2-2
• Uninstalling Host Drivers from an ISO Image, page 2-3

Note  See the "Root and Non-root Conventions in Examples" section on page ix for details about the significance of prompts used in the examples in this chapter.

Introduction

The Cisco Linux IB driver is delivered as an ISO image.
Contents of ISO Image

The ISO image contains the following directories and files:
• docs/  This directory contains the related documents.
• tsinstall  This is the installation script.
• redhat/  This directory contains the binary RPMs for Red Hat Enterprise Linux.
• suse/  This directory contains the binary RPMs for SUSE Linux Enterprise Server.
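As a minimal sketch of the installation flow, assuming the ISO is mounted through the loop device and installed with the tsinstall script listed above (the ISO file name and mount point are placeholders):

host1# mount -o ro,loop topspin-ib-rhel4-3.2.0-136.iso /mnt
host1# cd /mnt
host1# ./tsinstall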
topspin-ib-mpi-rhel4-3.2.0-136.x86_64 (MPI libraries, source code, docs, etc)
topspin-ib-mod-rhel4-2.6.9-34.ELsmp-3.2.0-136.x86_64 (kernel modules)
installing 100% ###############################################################
Upgrading HCA 0 HCA.LionMini.A0 to firmware build 3.2.0.136
New Node GUID = 0005ad0000200848
New Port1 GUID = 0005ad0000200849
New Port2 GUID = 0005ad000020084a
Programming HCA firmware...

Uninstalling Host Drivers from an ISO Image
C H A P T E R 3

IP over IB Protocol

This chapter describes IP over IB protocol and includes the following sections:
• Introduction, page 3-1
• Manually Configuring IPoIB for Default IB Partition, page 3-2
• Subinterfaces, page 3-2
• Verifying IPoIB Functionality, page 3-5
• IPoIB Performance, page 3-6
• Sample Startup Configuration File, page 3-8
• IPoIB High Availability, page 3-8

Note  See the "Root and Non-root Conventions in Examples" section on page ix for details about the significance of prompts used in the examples in this chapter.
Manually Configuring IPoIB for Default IB Partition

To manually configure IPoIB for the default IB partition, perform the following steps:

Step 1  Log in to your Linux host.
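The remaining steps assign an IP address to the IB interface. As a minimal sketch, assuming the first HCA port appears as ib0 and using placeholder addressing consistent with the other examples in this guide:

host1# ifconfig ib0 192.168.0.1 netmask 255.255.255.0 up
host1# ifconfig ib0

The second command displays the interface so that you can confirm the address and that the interface is UP and RUNNING.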
Creating a Subinterface Associated with a Specific IB Partition

To create a subinterface associated with a specific IB partition, perform the following steps:

Step 1  Create a partition on an IB SFS. Alternatively, you can choose to create the partition of the IB interface on the host first, and then create the partition for the ports on the IB SFS.
NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

Verify that you see the ib0.8002 output.

Step 6  Configure the new interface just as you would the parent interface. (See the "Manually Configuring IPoIB for Default IB Partition" section on page 3-2.)
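As a minimal sketch of Step 6, assuming the subinterface created above is ib0.8002 and using a placeholder address on a separate subnet reserved for that partition:

host1# ifconfig ib0.8002 192.168.2.1 netmask 255.255.255.0 up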
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1024
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)

lo  Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:378 errors:0 dropped:0 overruns:0 frame:0
TX packets:378 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:45730 (44.6 KiB) TX bytes:45730 (44.6 KiB)
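A simple end-to-end check, assuming a second host whose IPoIB interface is configured as 192.168.0.2 (the addresses are placeholders consistent with the examples in this guide), is to ping across the IB fabric:

host1$ ping -c 3 192.168.0.2

Replies confirm that IPoIB traffic is passing between the two hosts.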
IPoIB Performance

This section describes how to verify IPoIB performance by running the Bandwidth test and the Latency test. These tests are described in detail at the following URL:

http://www.netperf.org/netperf/training/Netperf.html

To verify IPoIB performance, perform the following steps:

Step 1  Download Netperf from the following URL:

http://www.netperf.org/netperf/NetperfPage.html

Step 2  Compile Netperf by following the instructions at http://www.netperf.org/.
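The next steps start the Netperf server on one host and run the Bandwidth test from the other. As a minimal sketch, assuming Netperf is in the PATH and the server host IPoIB address is 192.168.0.1, with the same TCP_STREAM options used in the SDP chapter of this guide:

host1$ netserver
host2$ netperf -H 192.168.0.1 -c -C -- -m 65536

The -c and -C options report client and server CPU utilization, and -m sets the message size.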
Step 5  Run the Netperf Latency test. Run the test once, and stop the server so that it does not repeat the test. The following example shows how to run the Latency test, and then stop the Netperf server:

host2$ netperf -H 192.168.0.1 -c -C -t TCP_RR -- -r 1,1
TCP REQUEST/RESPONSE TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 192.168.0.1 (192.168.0.1) port 0 AF_INET
Local /Remote
Socket Size  Request  Resp.  Elapsed  Trans.  CPU  CPU  S.dem  S.
Sample Startup Configuration File

IP addresses that are configured manually are not persistent across reboots. You must use a configuration file to configure IPoIB when the host boots. Two sample configurations are included in this section.

The following sample configuration shows an example file named ifcfg-ib0 that resides on a Linux host in /etc/sysconfig/network-scripts/ on RHEL3 and RHEL4.
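A minimal sketch of the ifcfg-ib0 contents, assuming a static address (the address and netmask are placeholders):

DEVICE=ib0
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
ONBOOT=yes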
Step 3  Take the interfaces offline. You cannot merge interfaces until you bring them down. The following example shows how to take the interfaces offline:

host1# ifconfig ib0 down
host1# ifconfig ib1 down

Step 4  Merge the two ports into one virtual IPoIB high availability port by entering the ipoibcfg merge command with the IB identifiers of the first and the second IB ports on the HCA.
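The merge command itself is shown below only as a sketch; the /usr/local/topspin path is an assumption based on where the other host-driver tools are installed, and ib0 and ib1 are placeholders for the identifiers reported by the ipoibcfg list command on your host:

host1# /usr/local/topspin/bin/ipoibcfg merge ib0 ib1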
Step 3  Display the available interfaces by entering the ipoibcfg list command.
C H A P T E R 4

SCSI RDMA Protocol

This chapter describes SCSI RDMA protocol and includes the following sections:
• Introduction, page 4-1
• Configuring SRP, page 4-1
• Verifying SRP, page 4-7

Note  See the "Root and Non-root Conventions in Examples" section on page ix for details about the significance of prompts used in the examples in this chapter.
Chapter 4 SCSI RDMA Protocol Configuring SRP This section contains information on how to configure your IB fabric to connect an SRP host to a SAN and includes the following topics: Note • Configuring ITLs when Using Fibre Channel Gateway, page 4-2 • Configuring SRP Host, page 4-6 If you intend to manage your environment with Cisco VFrame software, do not configure ITLs. Configuring ITLs when Using Fibre Channel Gateway This section describes how to configure ITLs when using Fibre Channel gateway.
Chapter 4 SCSI RDMA Protocol Configuring SRP Step 3 Bring up the Fibre Channel gateways on your SFS, by performing the following steps: a. Launch Element Manager. b. Double-click the Fibre Channel gateway card that you want to bring up. The Fibre Channel Card window opens. c. Click the Up radio button in the Enable/Disable Card field, and then click Apply. d. (Optional) Repeat this process for additional gateways. The Fibre Channel gateway automatically discovers all attached storage.
Chapter 4 SCSI RDMA Protocol Configuring SRP Configuring ITLs with Element Manager while Global Policy Restrictions Apply This section describes how to configure ITLs with Element Manager while global policy restrictions apply. These instructions apply to environments where the portmask policy and LUN masking policy are both restricted. To verify that you have restricted your policies, enter the show fc srp-global command at the CLI prompt.
Chapter 4 SCSI RDMA Protocol Configuring SRP Step 9 Click the Next > button. The Define New SRP Host window displays a recommended WWNN for the host and recommended WWPNs that represent the host on all existing and potential Fibre Channel gateway ports. Note Although you can manually configure the WWNN or WWPNs, we recommend that you use the default values to avoid conflicts. Step 10 Click Finish. The new host appears in the SRP Hosts display.
Configuring SRP Host

This section describes how to configure the SRP host. The SRP host driver exposes a Fibre Channel target (identified by a WWPN) as a SCSI target to the Linux SCSI mid-layer. In turn, the mid-layer creates Linux SCSI devices for each LUN found behind the target. The SRP host driver provides failover and load balancing for multiple IB paths for a given target.
Verifying SRP

This section describes how to verify SRP functionality and how to verify SRP host-to-storage connections with the Element Manager GUI. It includes the following sections:
• Verifying SRP Functionality, page 4-7
• Verifying with Element Manager, page 4-8

Verifying SRP Functionality

To verify SRP functionality, perform the following steps:

Step 1  Log in to your SRP host.
Step 2  Create a disk partition.
512000 inodes, 1023996 blocks
51199 blocks (5.
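For reference, a condensed sketch of the partition, file system, and mount sequence that produces output like the above; /dev/sdb is a placeholder for the SCSI device that the SRP driver exposes for your storage LUN, so substitute the device name reported on your host:

host1# fdisk /dev/sdb
host1# mke2fs -j /dev/sdb1
host1# mkdir -p /mnt/srp
host1# mount /dev/sdb1 /mnt/srp
host1# df -h /mnt/srp

The fdisk step creates a primary partition (/dev/sdb1 here), mke2fs -j builds an ext3 file system on it, and the final commands mount it and confirm the mount.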
C H A P T E R 5

Sockets Direct Protocol

This chapter describes the Sockets Direct Protocol and includes the following sections:
• Introduction, page 5-1
• Configuring IPoIB Interfaces, page 5-1
• Converting Sockets-Based Application, page 5-2
• SDP Performance, page 5-4
• Netperf Server with IPoIB and SDP, page 5-6

Note  See the "Root and Non-root Conventions in Examples" section on page ix for details about the significance of prompts used in the examples in this chapter.
Converting Sockets-Based Application

This section describes how to convert sockets-based applications. You can convert your sockets-based applications to use SDP instead of TCP by using one of two conversion types.
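One common way to run an unmodified sockets application over SDP with this driver stack is to preload the SDP library so that TCP sockets are converted transparently at run time. The library path below is an assumption (it varies by release and architecture), and myapp is a placeholder application:

host1$ export LD_PRELOAD=/usr/local/topspin/lib64/libsdp.so
host1$ ./myapp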
Log Statement

This section describes the log statement. The log directive allows the user to specify which debug and error messages are sent and where they are sent. The log statement format is as follows:

log [destination stderr | syslog | file filename] [min-level 1-9]

destination  Defines the destination of the log messages.
stderr  Forwards messages to the STDERR.
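For example, a log statement in the documented format that writes reasonably verbose messages to a file (the file name is a placeholder, and the location of the libsdp configuration file that holds such statements depends on your installation):

log destination file /tmp/libsdp.log min-level 7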
shared  This expression enables the user to match a server-bind request and then listen and accept incoming connections on both TCP and SDP protocols.
program  This expression enables the user to match the program name.

The ip_port expression matches against an IP address, prefix length, and port range. The format is as follows:

ip_addr[/prefix_length][:start_port[-end_port]]

The prefix length is optional; if it is missing, it defaults to /32 (the length of one host).
SDP Performance

Recv Socket Size bytes: 87380
Send Socket Size bytes: 16384
Send Message Size bytes: 65536
Elapsed Time secs.: 10.00
Throughput 10^6bits/s: 6601.82
Send local % S: 23.79
Recv remote % S: 21.37
Send local us/KB: 1.181
Recv remote us/KB: 1.061

The following list describes the parameters for the netperf command:
-H  Where to find the server
192.168.0.1  IPoIB IP address
-c  Client CPU utilization
-C  Server CPU utilization
--  Separates the global and test-specific parameters
-m  The message size, which is 65536 in the example above
The notable performance values in the example above are as follows:
Client CPU utilization is 6.26 percent of client CPU.
Server CPU utilization is 7.22 percent of server CPU.
Latency is 18.01 microseconds.

Latency is calculated as follows:

(1 / Transaction rate per second) / 2 * 1,000,000 = one-way average latency in microseconds

Step 7  To end the test, shut down the Netperf server.
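To make the latency formula above concrete, consider a worked example; the transaction rate here is back-calculated from the 18.01-microsecond figure and is shown only for illustration. A TCP_RR result of roughly 27,760 transactions per second gives (1 / 27,760) / 2 * 1,000,000, which is approximately 18.01 microseconds of one-way average latency.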
The following list describes parameters for the netperf command:
-H  Where to find the server
192.168.0.1  IPoIB IP address
-c  Client CPU utilization
-C  Server CPU utilization
--  Separates the global and test-specific parameters
-m  The message size, which is 65536 in the example above

The notable performance values in the example above are as follows:
Throughput is 6.60 gigabits per second.
Client CPU utilization is 23.79 percent of client CPU.
C H A P T E R 6

uDAPL

This chapter describes uDAPL and includes the following sections:
• Introduction, page 6-1
• uDAPL Test Performance, page 6-1
• Compiling uDAPL Programs, page 6-4

Note  See the "Root and Non-root Conventions in Examples" section on page ix for details about the significance of prompts used in the examples in this chapter.

Introduction

uDAPL defines a single set of user-level APIs for all RDMA-capable transports.
uDAPL Throughput Test Performance

The Throughput test measures RDMA WRITE throughput using uDAPL. To run the uDAPL Throughput test, perform the following steps:

Step 1  Start the Throughput test on the server host. The syntax for the server is as follows:

/usr/local/topspin/bin/thru_server.x device_name RDMA_size iterations batch_size

The following example shows how to start the Throughput test on the server host:

host1$ /usr/local/topspin/bin/thru_server.
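The client side of the Throughput test runs on the second host. The invocation below is only a sketch by analogy with the Latency test client shown later in this chapter; the argument order and values are assumptions, so check the usage message printed by thru_client.x on your system:

host2$ /usr/local/topspin/bin/thru_client.x ib0 192.168.0.1 262144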
Chapter 6 uDAPL Step 3 View the Throughput test results from the server. The following example shows the Throughput test results: Created an EP with ep_handle = 0x2a95f8a300 queried max_recv_dtos = 256 queried max_request_dtos = 1024 Accept issued... Received an event on ep_handle = 0x2a95f8a300 Context = 29a Connected! received rmr_context = 1b3b78 target_address = 95e3a000 segment_length = 40000 Sent 7759.462 Mb in 1.0 seconds throughput = 7741.811 Mb/sec Sent 7759.462 Mb in 1.
Chapter 6 uDAPL host2$ /usr/local/topspin/bin/lat_client.x ib0 192.168.0.1 200000 1 0 Step 3 • ib0 is the name of the device. • 192.168.0.1 is the IPoIB address of the server host. • 200000 is the number of RDMAs to perform for the test. • 1 is the size in bytes of the RDMA WRITE. • 0 is a flag specifying whether polling or event should be used. 0 signifies polling, and 1 signifies events. View the Latency results.
C H A P T E R 7

MVAPICH MPI

This chapter describes MVAPICH MPI and includes the following sections:
• Introduction, page 7-1
• Initial Setup, page 7-2
• Configuring SSH, page 7-2
• Editing Environment Variables, page 7-5
• MPI Bandwidth Test Performance, page 7-7
• MPI Latency Test Performance, page 7-8
• Intel MPI Benchmarks (IMB) Test Performance, page 7-9
• Compiling MPI Programs, page 7-12

Introduction

MPI is a standard library functionality in C, C++, and Fortran that is used to implement a message-passing program.
Initial Setup

This section describes the initial MPI setup. MPI can be used with either IPoIB or Ethernet IP addresses. The drivers for MPI are automatically loaded at boot time if IPoIB or SDP is loaded. If neither of those drivers is used, the MPI drivers can still be loaded at boot time. To enable loading the MPI drivers at boot time, run chkconfig ts_mpi on. The drivers for MPI can be loaded manually with service ts_mpi start.
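A sketch of that setup, together with a simple MPI host file of the kind used by the test examples later in this chapter (the host names are placeholders, and /tmp/hostfile matches the path used in those examples):

host1# chkconfig ts_mpi on
host1# service ts_mpi start
host1$ cat /tmp/hostfile
host1
host2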
Chapter 7 MVAPICH MPI Configuring SSH To configure SSH, perform the following steps: Step 1 Log in to the host that you want to configure as the local host, host1. The following example shows how to log in to the host: login: username Password: password host1$ Note Step 2 Your exact login output is slightly different and could display information such as the day and the last time you logged in. Generate a public/private DSA key pair by entering the ssh-keygen -t dsa command.
Chapter 7 MVAPICH MPI Configuring SSH Step 5 Change into the .ssh directory that you created. The following example shows how to change into the .ssh directory: host1$ cd .ssh Step 6 Copy the public key that was just generated to the authorized keys file. The following example shows how to copy the public key to authorized keys file: host1$ cp id_dsa.pub authorized_keys host1$ chmod 0600 authorized_keys Step 7 Test your SSH connection to host1.
Chapter 7 MVAPICH MPI Editing Environment Variables Step 9 Return to host1 and copy the authorized keys file from Step 6 to the directory that you created in Step 8. The following example shows how to return to host1 and copy the authorized keys file to the directory that was created: host1$ scp authorized_keys host2:.ssh Note If this is the first time you have logged in to host2 using SSH or SCP, you see an authenticity message for host2. Type yes to continue connecting.
Chapter 7 MVAPICH MPI Editing Environment Variables Setting Environment Variables in System-Wide Startup Files This method is used to set a system-wide default for which MPI implementation is used. This method is the easiest for end users; users who log in automatically have MPI implementations set up for them without executing any special commands to find MPI executables, such as mpirun or mpicc. The example below describes how to set up MVAPICH in system-wide startup files.
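A sketch of such a system-wide setup, assuming a script dropped into /etc/profile.d (the file name is a placeholder) and the MVAPICH installation path used elsewhere in this chapter:

# /etc/profile.d/mvapich.sh (placeholder name)
export PATH=/usr/local/topspin/mpi/mpich/bin:$PATH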
Chapter 7 MVAPICH MPI MPI Bandwidth Test Performance Editing Environment Variables Manually Typically, you edit environment variables manually when it is necessary to run temporarily with a given MPI implementation. For example, when it is not desirable to change the default MPI implementation, you can edit the environment variables manually and set MVAPICH to be used for the shell where the variables are set. The following example shows how to create a setup that uses MVAPICH in a single shell.
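A sketch of that single-shell setup, assuming the MVAPICH installation path used elsewhere in this chapter:

host1$ export PATH=/usr/local/topspin/mpi/mpich/bin:$PATH

MPI Bandwidth Test Performance

The MVAPICH MPI Bandwidth test is launched much like the Latency test; the osu_bw executable name below is an assumption by analogy with osu_latency:

host1$ mpirun_rsh -np 2 -hostfile /tmp/hostfile \
/usr/local/topspin/mpi/mpich/bin/osu_bw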
When the test completes successfully, you see output that is similar to the following:

# OSU MPI Bandwidth Test (Version 2.2)
# Size      Bandwidth (MB/s)
1           3.352541
2           6.701571
4           10.738255
8           20.703599
16          39.875389
32          75.128393
64          165.294592
128         307.507508
256         475.587808
512         672.716075
1024        829.044908
2048        932.896797
4096        1021.088303
8192        1089.791931
16384       1223.756784
32768       1305.416744
65536       1344.005127
131072      1360.208200
262144      1373.802207
524288      1372.
MPI Latency Test Performance

• The name of the hostfile
• The latency executable name

The following example shows how to run the MVAPICH MPI Latency test:

host1$ mpirun_rsh -np 2 -hostfile /tmp/hostfile \
/usr/local/topspin/mpi/mpich/bin/osu_latency

When the test completes successfully, you see output that is similar to the following:

# OSU MPI Latency Test (Version 2.2)
# Size      Latency (us)
0           2.83
1           2.85
2           2.86
4           2.94
8           2.97
16          2.97
32          3.08
64          3.11
128         3.90
256         4.

Intel MPI Benchmarks (IMB) Test Performance
Chapter 7 MVAPICH MPI Intel MPI Benchmarks (IMB) Test Performance When your installation is not working properly, the IMB test might lead to VAPI_RETRY_EXEC errors. You should check the output of the PingPong, PingPing, and Sendrecv bandwidth measurements against known good results on similar architectures and devices. Low-bandwidth values, especially at high numbers of nodes, might indicate either severe congestion or functionality problems within the IB fabric.
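The benchmarks are launched with mpirun_rsh in the same way as the OSU tests; the path to the IMB-MPI1 executable below is an assumption, so substitute the location where the Intel MPI Benchmarks are installed on your system:

host1$ mpirun_rsh -np 2 -hostfile /tmp/hostfile \
/usr/local/topspin/mpi/mpich/tests/IMB-2.3/IMB-MPI1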
When the test completes successfully, you see output similar to the following:

#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V2.3, MPI-1 part
#---------------------------------------------------
# Date    : Thu Oct 12 17:48:21 2006
# Machine : x86_64
# System  : Linux
# Release : 2.6.9-42.
Compiling MPI Programs

This section describes how to compile MPI programs. Compiling MPI applications from source code requires adding several compiler and linker flags. MVAPICH MPI provides wrapper compilers that add all appropriate compiler and linker flags to the command line and then invoke the appropriate underlying compiler, such as the GNU or Intel compilers, to actually perform the compile and/or link.
Step 3  Select the language and compiler of your choice from the selection of compiler wrappers available in Table 7-2.

Table 7-2  Selecting Language and Compiler Wrappers

Language    GNU             Intel       PGI
C           mpicc           mpicc.i     not applicable
C++         mpiCC           mpiCC.i     not applicable
Fortran 77  mpif77          mpif77.i    mpif77.p
Fortran 90  not applicable  mpif90.i    mpif90.
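As a usage sketch, compiling and running a C MPI program with the GNU wrapper; hello.c is a placeholder source file, and the host file is the one used by the earlier test examples:

host1$ mpicc -o hello hello.c
host1$ mpirun_rsh -np 2 -hostfile /tmp/hostfile ./hello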
C H A P T E R 8

HCA Utilities and Diagnostics

This chapter describes the HCA utilities and diagnostics and includes the following sections:
• Introduction, page 8-1
• hca_self_test Utility, page 8-1
• tvflash Utility, page 8-3
• Diagnostics, page 8-5

Introduction

The sections in this chapter discuss HCA utilities and diagnostics. These features address basic usability and provide starting points for troubleshooting.
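hca_self_test Utility

The hca_self_test output shown next is produced by running the utility as root; the path below is an assumption based on where the other host-driver tools are installed, so adjust it to match your installation:

host1# /usr/local/topspin/bin/hca_self_test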
Host Driver Initialization ............. PASS
Number of HCA Ports Active ............. 2
Port State of Port #0 on HCA #0 ........ UP 4X
Port State of Port #1 on HCA #0 ........ UP 4X
Error Counter Check on HCA #0 .......... PASS
Kernel Syslog Check .................... PASS
Node GUID ..............................
tvflash Utility

This section describes the tvflash utility and includes the following topics:
• Viewing Card Type and Firmware Version, page 8-3
• Upgrading Firmware, page 8-4

Note  The firmware upgrade is handled automatically by the installation script. You should not have to upgrade the firmware manually. For more information about the installation script, see Chapter 2, "Installing Host Drivers."
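Viewing Card Type and Firmware Version

To view the card type and firmware version, tvflash is typically run with an information option; both the -i option and the path below are assumptions, so confirm them against the tvflash usage message on your host:

host1# /usr/local/topspin/sbin/tvflash -i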
Upgrading Firmware

To upgrade firmware on your host, perform the following steps:

Note  Upon installation of the host drivers, the firmware is automatically updated, if required. However, if you have outdated firmware on a previously installed HCA, you can upgrade the firmware manually.

Step 1  Log in to your host, and flash the updated firmware binary to your local device. The firmware images are at /usr/local/topspin/share.
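A sketch of the manual flash step, assuming tvflash takes the firmware image path as its argument; the image file name below is a placeholder for the appropriate image under /usr/local/topspin/share for your HCA model:

host1# /usr/local/topspin/sbin/tvflash /usr/local/topspin/share/fw-hca.bin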
Diagnostics

This section includes diagnostics information. A few diagnostic programs are included with the Linux IB host drivers. The vstat utility prints IB information.
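For example, running vstat with no arguments prints information about the installed HCA and its ports; the path below is an assumption based on where the other host-driver utilities are installed:

host1$ /usr/local/topspin/bin/vstat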
A P P E N D I X A

Acronyms and Abbreviations

Table A-1 defines the acronyms and abbreviations that are used in this guide.
Table A-1  List of Acronyms and Abbreviations (continued)

Acronym  Expansion
SSH      Secure Shell Protocol
TCP      Transmission Control Protocol
uDAPL    User Direct Access Programming Library
ULP      upper-level protocol
WWNN     world-wide node name
WWPN     world-wide port name