Installation guide

Page 33 of 59
c. Be sure to use slots with the same PCI version and bus speed for both
HBAs on each cluster server (there is a diagram of slots on the top inside
cover of the servers).
3. Initialize the CXx00 array – this involves accessing the storage array through a
serial connection or crossover cable and setting key parameters, such as the IP
addresses of the management interface on each storage processor. In addition,
any required array-based software is installed or upgraded at this time.
4. Install host based software –
a. QLogic/Emulex HBA drivers – in addition to installing the correct version
of the HBA driver, it is important to update the QLogic or Emulex HBA
BIOS to the currently supported level.
b. Navisphere™ Agent – this host-based agent will be used to register
connected hosts with the storage array.
c. PowerPath™ – this software is used to configure failover and load
balancing between Fibre Channel connections on a given server.
5. Configure Fibre Channel switches – this step involves connecting to the switches
via a serial or web interface.
a. Enter the IP address for the switch and perform the initial
configuration in preparation for zoning.
b. Perform zoning on the Fibre Channel switches – this step is performed over the
network via a web interface. Zoning is the process of mapping servers to
the storage array and granting specific access rights to servers.
6. Configure storage with Navisphere – these are the key steps for organizing and
presenting storage to the servers. Included are:
a. Create RAID groups from sets of disks
b. Subdivide the RAID groups into logical disk units called LUNs
c. Create a storage group that includes the servers and LUNs for the Oracle
RAC system. The components of this storage group will be allowed to
connect to each other.
7. Access storage through each server in the cluster – all servers should now have
the same view of the shared external storage. PowerPath software adds value by
managing multiple connections to the storage array per server for path failover
and load balancing.
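Once PowerPath is managing the connections, its path report (for example, `powermt display dev=all`) is one way to confirm that every expected path to the array is alive. The sketch below parses a hypothetical excerpt of such output – the adapter names, device names, and SP ports shown are illustrative only, not from a real array – and counts the live paths:

```shell
# Hypothetical excerpt of PowerPath path listing; the qla2300 adapter,
# sdX device names, and SP port labels are made-up examples.
powermt_out='   1 qla2300   sdb   SP A0   active   alive   0   0
   2 qla2300   sdf   SP B0   active   alive   0   0
   3 qla2300   sdj   SP A1   active   alive   0   0
   4 qla2300   sdn   SP B1   active   alive   0   0'

# Each LUN should show one live path per HBA-port/SP-port combination;
# with two HBA ports and two SP ports per server, expect four.
echo "$powermt_out" | grep -c alive
```

If the count comes up short, recheck the cabling and the zoning on the Fibre Channel switches before proceeding.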
Verifying that the SAN is Ready for Oracle
Once the RAID groups, LUNs, and storage groups have been prepared on the CLARiiON
array, and zoning has been performed on the Fibre Channel switches, simply connecting
the Fibre Channel cables to the HBAs should allow the servers to see storage. One of the key tools
for viewing the storage configuration is the command:
# less /proc/partitions
This command allows you to scroll through a list of storage devices visible to the server
(press “q” to exit). At first you will see a list of physical devices of the form /dev/sdX,
where “X” is one or more letters. This list may be as much as four times the
actual number of physical LUNs configured on the server, because each possible pathway to a
given physical disk through a separate HBA port and switch port counts as a separate
device. Ultimately, this raw view of the disks would not work for connecting to Oracle without
further configuration (Oracle would suffer from “double vision” in its view of the disks).
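To see how this path multiplication shows up in practice, the sketch below filters a sample of /proc/partitions content – the device names and block counts are made up for illustration – and counts only the whole-disk sd entries, skipping partition entries such as sda1:

```shell
# Hypothetical /proc/partitions content; field 4 is the device name.
sample='major minor  #blocks  name

   8     0   8385930 sda
   8     1   8385898 sda1
   8    16  71687325 sdb
   8    32  71687325 sdc
   8    48  71687325 sdd
   8    64  71687325 sde'

# Count whole-disk SCSI devices, excluding partitions like sda1.
# Here: one internal disk (sda) plus one LUN seen over four paths (sdb-sde).
echo "$sample" | awk '$4 ~ /^sd[a-z]+$/ { n++ } END { print n }'
```

In this made-up example a single LUN reached through two HBA ports and two SP ports appears as four devices, which is exactly the “double vision” that PowerPath resolves by presenting one pseudo-device per LUN.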