Diamond Storage Array Installation, Operations, Maintenance Manual
Table Of Contents
- Preface
- 1.0 Diamond Storage Array Product Overview
- 2.0 Diamond Storage Array Technical Overview
- 3.0 Installation Instructions
- 3.2 Physical Set Up
- 4.0 Determining Drive and Sled Designations
- 5.0 Accessing the Array
- Command Line Interface
- ATTO ExpressNAV
- In-band SCSI over Fibre Channel
- RS-232 port
- Ethernet port
- SNMP
- I/O details
- Browser compatibility
- Opening an ExpressNAV session
- Navigating ExpressNAV
- Exhibit 5.4-1 A typical page in the ATTO ExpressNAV configuration tool.
- Status
- Ethernet
- SNMP
- Serial Port
- Fibre Channel
- Storage Management
- RAID
- Clear Data
- Logical Units
- Rebuild
- Configuration
- Advanced
- Restart
- Help
- FirmwareRestart
- Help
- RestoreConfiguration
- SaveConfiguration
- SystemSN
- VerboseMode
- EthernetSpeed
- FTPPassword
- IPAddress
- IPDHCP
- IPGateway
- IPSubnetMask
- SNMPTrapAddress
- SNMPTraps
- SNMPUpdates
- TelnetPassword
- TelnetTimeout
- TelnetUsername
- FcConnMode
- FcDataRate
- FcFairArb
- FcFrameLength
- FcFullDuplex
- FcHard
- FcHardAddress
- FcNodeName
- FcPortInfo
- FcPortList
- FcPortName
- FcWWName
- SerialPortBaudRate
- SerialPortEcho
- SerialPortHandshake
- SerialPortStopBits
- AudibleAlarm
- DiamondModel
- DiamondName
- DriveCopyStatus
- DriveInfo
- FcNodeName
- FcPortList
- FcPortName
- Help
- IdentifyDiamond
- Info
- LUNInfo
- SerialNumber
- SledFaultLED
- SMARTData
- Temperature
- VirtualDriveInfo
- FcScsiBusyStatus
- FirmwareRestart
- MaxEnclTempAlrm
- MinEnclTempAlrm
- Temperature
- Zmodem
- ATADiskState
- AutoRebuild
- ClearDiskReservedAreaData
- DriveCopy
- DriveCopyHalt
- DriveCopyResume
- DriveCopyStatus
- DriveInfo
- DriveSledPower
- DriveWipe
- IdeTransferRate
- LUNInfo
- LUNState
- QuickRAID0
- QuickRAID1
- QuickRAID5
- QuickRAID10
- RAID5ClearData
- RAID5ClearDataStatus
- RAIDInterleave
- RAIDHaltRebuild
- RAIDManualRebuild
- RAIDRebuildState
- RAIDRebuildStatus
- RAIDResumeRebuild
- RebuildPriority
- ResolveLUNConflicts
- RestoreModePages
- SledFaultLED
- VirtualDriveInfo
- 6.0 Configuring Drives
- JBOD (Just a Bunch of Disks)
- RAID Level 0
- RAID Level 1
- RAID Level 10
- RAID Level 5
- Interleave
- Hot Spare sleds
- Enhancing performance
- Sled-based versus disk-based
- Exhibit 6.2-1 Sled-based QuickRAID0 stripe groups with LUN designations in a fully populated Array set up as QuickRAID0 6 sled. If sled 6 were to be withdrawn from the array, LUN 3 (grayed boxes) would be unavailable.
- Exhibit 6.2-2 Drive-based QuickRAID0 stripe groups with LUN designations in a fully populated Array set up as QuickRAID0 6 Drive. If sled 6 were to be withdrawn from the array, LUNs 2 and 5 would be unavailable.
- Exhibit 6.2-3 Configurations of a fully populated Diamond Storage Array in RAID Level 0.
- Exhibit 6.3-1 Drive sleds, LUNs and mirror partners in a RAID Level 1 configuration.
- Hot Spare sleds
- Configuring a fully-populated array
- Configuring a partially-populated array
- Removing RAID groups
- Hot Spare sleds
- 7.0 Hardware Maintenance
- 8.0 Copying Drives
- 9.0 Updating Firmware
- 10.0 System Monitoring and Reporting
- RS-232 monitoring port and CLI
- Ethernet monitoring port and CLI
- Power On Self Test (POST)
- Ready LED
- Audible alarm
- Thermal monitoring
- Power supply monitoring
- System fault LED and error codes
- Disk drive activity and disk fault LEDs
- Windows 2000 special instructions
- Error messages
- Specific situations and suggestions
- Default
- Factory Default
- Appendix A ATA Disk Technology
- Appendix B Information Commands Results
- Appendix C Product Safety
- Appendix D Specifications
- Appendix E Warranty
Configuring drives
RAID Level 10
RAID Level 10 is used in applications requiring high performance and redundancy, combining the attributes of RAID Levels 1 and 0. The QuickRAID10 command, accessed through the Command Line Interface, allows a simple, out-of-the-box setup of RAID Level 10 groups.
The array will operate in degraded mode if a drive fails unless you have enabled Hot Spare sleds.
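The layout below is a minimal sketch, in plain Python rather than array firmware, of how a RAID Level 10 group combines the two levels: logical data is striped (RAID 0) across mirrored pairs of sleds (RAID 1). The sled pairing, interleave value and ordering are assumptions made for the illustration, not the array's actual internal mapping.

```python
# Illustrative only: how RAID Level 10 combines RAID 1 mirroring with
# RAID 0 striping. Sled numbering, pairing and interleave are hypothetical,
# not the Diamond Storage Array's actual internal layout.

MIRROR_PAIRS = [(1, 2), (3, 4), (5, 6)]   # three mirrored pairs of sleds
INTERLEAVE_BLOCKS = 128                    # blocks written to each pair per stripe

def locate(logical_block: int):
    """Return the two sleds (primary, mirror) that hold a logical block."""
    stripe_unit = logical_block // INTERLEAVE_BLOCKS
    pair = MIRROR_PAIRS[stripe_unit % len(MIRROR_PAIRS)]   # RAID 0 striping
    return pair                                            # RAID 1 mirroring

# Example: block 300 falls in stripe unit 2, so it lives on sleds 5 and 6.
print(locate(300))
```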
RAID Level 5
RAID Level 5 increases reliability while using
fewer disks than mirroring by employing parity
redundancy. Distributed parity on multiple drives
provides the redundancy to rebuild a failed drive
from the remaining good drives. Parity data is
added to the transmitted data at one end of the
transaction, then the parity data is checked at the
other end to make sure the transmission has not
had any errors.
In the array, transmitted data with the added parity
data is striped across disk drives. A hardware
XOR engine computes parity, thus alleviating
software processing during reads and writes.
The array will operate in degraded mode if a drive
fails unless you have enabled Hot Spare sleds.
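The XOR parity scheme is easy to see in miniature. The sketch below, ordinary Python rather than array firmware, computes a parity strip from the data strips of one stripe and then rebuilds a lost strip from the survivors; the hardware XOR engine performs the equivalent operation inside the array.

```python
# Miniature illustration of RAID Level 5 parity, not the array's firmware.
# Parity is the byte-wise XOR of the data strips in a stripe; any single
# missing strip can be recovered by XOR-ing the survivors with the parity.

def xor_strips(strips):
    """Byte-wise XOR of equal-length byte strings."""
    result = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            result[i] ^= b
    return bytes(result)

data = [b"AAAA", b"BBBB", b"CCCC"]      # data strips on three drives
parity = xor_strips(data)               # parity strip on a fourth drive

# Drive holding data[1] fails: rebuild its strip from the rest plus parity.
rebuilt = xor_strips([data[0], data[2], parity])
assert rebuilt == data[1]
```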
Interleave
The interleave size sets the amount of data to be
written to each drive in a RAID group. This is a
tunable parameter which takes a single stream of
data and breaks it up to use multiple disks per I/O
interval.
The CLI command RAIDInterleave allows you to change the size of the sector interleave for RAID groups. The value will depend upon the normal expected file transfer size: if the normal file transfer size is large, the interleave value should be large, and vice versa.
The value entered for the RAIDInterleave command refers to blocks of data: one block is equivalent to 512 bytes of data.
Valid entries are 16, 32, 64, 128, 256 and SPAN. SPAN, which is not available in RAID Level 5, sets the interleave size to the minimum drive size of all members in the group.
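As a rough illustration of the arithmetic behind the interleave setting, the sketch below maps a logical block to a drive and byte offset in a hypothetical RAID Level 0 group. The drive ordering and mapping are invented for the example and may differ from the array's real layout; the 512-byte block size is the one stated above.

```python
# Rough illustration of interleave arithmetic; drive ordering is hypothetical.
BLOCK_BYTES = 512          # one block, as used by the RAIDInterleave command

def placement(logical_block: int, interleave_blocks: int, drives: int):
    """Which drive a logical block lands on, and its byte offset on that drive."""
    stripe_unit = logical_block // interleave_blocks
    drive = stripe_unit % drives
    offset_blocks = (stripe_unit // drives) * interleave_blocks + (logical_block % interleave_blocks)
    return drive, offset_blocks * BLOCK_BYTES

# With RAIDInterleave 128, each drive receives 128 * 512 = 64 KB per stripe.
print(placement(logical_block=1000, interleave_blocks=128, drives=6))
```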
Hot Spare sleds
In most configurations, if a member of a virtual device becomes degraded, you must swap out the faulted sled as defined in Hot Swap Operating Instructions on page 71. If you have not enabled AutoRebuild, you must also start a manual rebuild.
For four configurations, however, Hot Spare sleds
may be designated as replacements for faulted
sleds without intervention by you or a host.
Each configuration requires a certain number of
Hot Spare sleds. These sleds, once designated as
Hot Spares, are not available for other use.
The following configurations will support optional Hot Spare sleds:
- RAID Level 1: 2 Hot Spare sleds
- RAID Level 10: 1 group, 2 Hot Spare sleds
- RAID Level 5: 1 group, 1 Hot Spare sled
- RAID Level 5: 2 groups, 2 Hot Spare sleds
Enhancing performance
SpeedWrite, enabled by the CLI command SpeedWrite, improves the performance of WRITE commands.










