D-Link iSCSI IP SAN Storage
10GbE iSCSI to SATA II / SAS RAID IP SAN Storage
DSN-6410 & DSN-6420
User Manual
Version 1.
Preface

Copyright
Copyright © 2011, D-Link Corporation. All rights reserved. No part of this manual may be reproduced or transmitted without written permission from D-Link Corporation.

Trademarks
All products and trade names used in this manual are trademarks or registered trademarks of their respective companies.

About this manual
This manual introduces the D-Link DSN-64x0 IP SAN storage and is intended to help users understand the operation of the disk array system.
Table of Contents

Chapter 1 Overview: Features, RAID concepts, iSCSI concepts
Chapter 2 Installation
Chapter 3 Quick setup
Chapter 4 Configuration: Web UI management interface hierarchy, System configuration, iSCSI configuration, Volume configuration (Physical disk, RAID group, Virtual disk, Snapshot, Logical unit), Enclosure management, System maintenance
Chapter 5 Advanced operations
Chapter 6 Troubleshooting: System buzzer, Event notifications
Appendix A. Certification list
Appendix B. Microsoft iSCSI initiator
Appendix C. From single controller to dual controllers
Chapter 1 Overview 1.1 Features D-LINK DSN-6000 series IP SAN storage provides non-stop service with a high degree of fault tolerance by using D-LINK RAID technology and advanced array management features. The DSN-6410/6420 IP SAN storage connects to the host system through an iSCSI interface and can be configured to numerous RAID levels. The IP SAN storage provides reliable data protection for servers by using RAID 6, which allows two simultaneous HDD failures without any impact on the existing data.
D-LINK DSN-6410/6420 feature highlights
Host Interface: 4 x 10GbE iSCSI ports (DSN-6420); 2 x 10GbE iSCSI ports (DSN-6410)
Drive Interface: 12 x SAS or SATA II
RAID Controllers: Dual-active RAID controllers (DSN-6420); single controller, upgradable to dual (DSN-6410)
Scalability: SAS JBOD expansion port
Green: Auto disk spin-down; advanced cooling
RAID Level: RAID 0, 1, 0+1, 3, 5, 6, 10, 30, 50, 60 and JBOD; N-way mirror
Compatibility: Supports multiple OSes, applications, 10GbE NICs, 10GbE i…
RAID is the abbreviation of “Redundant Array of Independent Disks”. The basic idea of RAID is to combine multiple drives together to form one large logical drive. This RAID drive provides better performance, capacity and reliability than a single drive. The operating system detects the RAID drive as a single storage device. 1.2.1 Terminology The document uses the following terms: Part 1: Common RAID Redundant Array of Independent Disks.
in cache, and the actual writing to non-volatile media occurs at a later time. It speeds up system write performance but bears the risk that data may be inconsistent between the cache and the physical disks for a short time interval. RO Set the volume to be Read-Only. DS Dedicated Spare disks. The spare disks are only used by one specific RG. Others could not use these dedicated spare disks for any rebuilding purpose. GS Global Spare disks. GS is shared for rebuilding purpose.
MTU Maximum Transmission Unit. CHAP Challenge Handshake Authentication Protocol. An optional security mechanism to control access to an iSCSI storage system over the iSCSI data ports. iSNS Internet Storage Name Service. Part 3: Dual controller SBB 1.2.2 Storage Bridge Bay.
four hard drives. RAID 10 Striping over the member RAID 1 volumes. RAID 10 needs at least four hard drives. RAID 30 Striping over the member RAID 3 volumes. RAID 30 needs at least six hard drives. RAID 50 Striping over the member RAID 5 volumes. RAID 50 needs at least six hard drives. RAID 60 Striping over the member RAID 6 volumes. RAID 60 needs at least eight hard drives. JBOD The abbreviation of “Just a Bunch Of Disks”. JBOD needs at least one hard drive. 1.2.
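As a rough illustration of the capacity trade-offs among the RAID levels above, the following Python sketch estimates usable capacity for a set of equally sized drives. It is a simplification for illustration only; the actual usable size reported by the system also depends on metadata overhead and rounding.

# Rough usable-capacity estimate for equally sized drives (illustration only).
def usable_capacity(level, drives, size_gb):
    if level == "RAID 0":
        return drives * size_gb                  # striping, no redundancy
    if level == "RAID 1":
        return size_gb                           # mirroring
    if level in ("RAID 3", "RAID 5"):
        return (drives - 1) * size_gb            # one drive's worth of parity
    if level == "RAID 6":
        return (drives - 2) * size_gb            # two drives' worth of parity
    if level == "RAID 10":
        return (drives // 2) * size_gb           # striped mirrors
    raise ValueError("level not covered in this sketch")

for level in ("RAID 0", "RAID 5", "RAID 6", "RAID 10"):
    print(level, usable_capacity(level, 4, 500), "GB usable from 4 x 500 GB")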
1.3 iSCSI concepts iSCSI (Internet SCSI) is a protocol which encapsulates SCSI (Small Computer System Interface) commands and data in TCP/IP packets for linking storage devices with servers over common IP infrastructures. iSCSI provides high performance SANs over standard IP networks like LAN, WAN or the Internet. IP SANs are true SANs (Storage Area Networks) which allow several servers to attach to an infinite number of storage volumes by using iSCSI over TCP/IP networks.
A hardware iSCSI HBA provides its own initiator tool. Please refer to the HBA vendor’s user manual. Microsoft, Linux, Solaris and Mac provide iSCSI initiator drivers. Please contact D-Link for the latest certification list. Below are the available links: 1. Link to download the Microsoft iSCSI software initiator: http://www.microsoft.com/downloads/details.aspx?FamilyID=12cb3c1a-15d6-4585b385-befd1319f825&DisplayLang=en 2. In current Linux distributions, OS built-in iSCSI initiators are usually available.
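For the Linux built-in initiator (open-iscsi), target discovery and login are normally performed with the iscsiadm utility. The snippet below is only a minimal sketch of that workflow; the data-port IP address and the target IQN are placeholders and must be replaced with the values shown in the storage web UI.

# Minimal sketch: discover and log in to an iSCSI target with open-iscsi (Linux).
# The portal IP and the target IQN below are placeholders.
import subprocess

portal = "192.168.1.1:3260"    # an iSCSI data port of the IP SAN storage
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal], check=True)

target_iqn = "iqn.2004-08.com.example:dsn-6410.node1"   # hypothetical node name
subprocess.run(["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal, "--login"], check=True)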
The management port can be transferred smoothly to the other controller with the same IP address
Online firmware upgrade, no system down time (only for DSN-6420)
Multiple target iSCSI nodes per controller support
Each LUN can be attached to one of 32 nodes from each controller
Front-end 2 x 10GbE iSCSI ports with high availability / load balancing / fail-over support per controller
Microsoft MPIO, MC/S, Trunking, LACP, etc.
SBB compliant
5. Instant volume configuration restoration
6. Smart faulty sector relocation
7. Hot pluggable battery backup module support

Enclosure monitoring:
1. S.E.S. inband management
2. UPS management via dedicated serial port
3. Fan speed monitors
4. Redundant power supply monitors
5. Voltage monitors
6. Thermal sensors for both RAID controller and enclosure
7. Status monitors for D-LINK SAS JBODs
Windows, Linux, Solaris, Mac
Drive support:
1. SAS
2. SATA II (optional)
3. SCSI-3 compliant
4. Multiple IO transaction processing
5. Tagged command queuing
6. Disk auto spin-down support
7. S.M.A.R.T. for SATA II drives
8. SAS JBOD expansion
Power and Environment
AC input: 100-240V ~ 7A-4A, 500W with PFC (Auto Switching)
DC output: 3.
(EN55022 / EN55024) UL statement FCC statement This device has been shown to be in compliance with and was tested in accordance with the measurement procedures specified in the Standards and Specifications listed below and as indicated in the measurement report number: xxxxxxxx-F Technical Standard: FCC Part 15 Class A (Verification) IC ICES-003 CE statement This device has been shown to be in compliance with and was tested in accordance with the measurement procedures specified in the Standards and Speci
The ITE is not intended to be installed and used in a home, school or public area accessible to the general population, and the thumbscrews should be tightened with a tool after both initial installation and subsequent access to the panel. Warning: Remove all power supply cords before servicing. This equipment is intended for installation in a restricted access location.
Chapter 2 Installation 2.1 Package contents The package contains the following items: 1. 2. 3. 4. 5. 6. 7. 8. DSN-6410/6420 IP SAN storage (x1) HDD trays (x12) Power cords (x4) RS-232 cables (x2), one is for console, the other is for UPS. CD (x1) Rail kit (x1 set) Keys, screws for drives and rail kit (x1 packet) SFP and 5 Meter cable 2.2 Before installation Before starting, prepare the following items. 1. 2. 3. 4. 5. 6. A host with a Gigabit Ethernet NIC or iSCSI HBA.
The drives can be installed into any slot in the enclosure. Slot numbering will be reflected in the web UI. Tips It is advisable to install at least one drive in slots 1 ~ 4. System event logs are saved to drives in these slots; if no drives are fitted in these slots, the event logs will be lost in the event of a system reboot. 2.3.2 Front LED lights There are three LED lights on the left frame bar. Figure 2.3.2.1 LED lights description: Power LED: Green Power on. Off Power off.
2.3.3 Install drives Note : Skip this section if you purchased a solution populated with drives.
Figure 2.3.3.3 HDD tray description: 2.3.4 HDD power LED: Green HDD is inserted and good. Off No HDD. HDD access LED: Blue blinking HDD is accessing. Off No HDD. HDD tray handhold. Latch for tray kit removal. Rear view Figure 2.3.4.
Controller 2. (only on DSN-6420) Controller 1. Power supply unit (PSU1). Fan module (FAN1 / FAN2). Power supply unit (PSU2). Fan module (FAN3 / FAN4).
Figure 2.3.4.3 (DSN-6410 SFP+) Connector, LED and button description: 10GbE ports (x2). Link LED: Orange Asserted when a 1G link is established and maintained. Blue Asserted when a 10G link is established and maintained. Access LED: Yellow Asserted when the link is established and packets are being transmitted along with any receive activity. LED (from right to left) Controller Health LED: Green Controller status is normal or the controller is booting.
BBM Status Button: When the system power is off, press the BBM status button; if the BBM LED is green, the BBM still has power to keep data in the cache. If not, the BBM power has run out and it can no longer keep the data in the cache. Management port. Console port. RS 232 port for UPS. SAS JBOD expansion port. 2.4 Install battery backup module To install the IP SAN storage with a battery backup module, please follow the procedure. Figure 2.4.1 1. 2. 3. 4. 5.
2.5 Deployment Please refer to the following topology and have all the connections ready. Figure 2.5.1 (DSN-6420) Figure 2.5.2 (DSN-6410) 1. Setup the hardware connection before power on servers. Connect console cable, management port cable, and iSCSI data port cables in advance.
2. 3. 4. In addition, installing an iSNS server is recommended for a dual controller system. Power on the DSN-6420/6410 and DSN-6020 (optional) first, and then power on the hosts and the iSNS server. It is suggested that the host server log on to the target twice (both controller 1 and controller 2), so that MPIO is set up automatically. (only for DSN-6420) Tips An iSNS server is recommended for a dual controller system.
Figure 2.5.4 1. 2. Use the RS-232 cable for console (black color, phone jack to DB9 female) to connect from the controller to the management PC directly. Use the RS-232 cable for UPS (gray color, phone jack to DB9 male) to connect from the controller to the APC Smart-UPS serial cable (DB9 female side), and then connect that serial cable to the APC Smart-UPS. Caution It may not work when connecting the RS-232 cable for UPS (gray color, phone jack to DB9 male) to the APC Smart-UPS directly.
Chapter 3 Quick setup 3.1 Management interfaces There are three management methods for the D-LINK IP SAN storage, described in the following: 3.1.1 Serial console Use the console cable (NULL modem cable) to connect from the console port of the D-LINK IP SAN storage to the RS 232 port of the management PC. Please refer to figure 2.3.1. The console settings are as follows: Baud rate: 115200, 8 data bits, no parity, 1 stop bit, and no flow control. Terminal type: vt100 Login name: admin Default password: 123456 3.1.
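Besides a terminal emulator, the console settings above can also be used from a script. The following minimal sketch uses the third-party pyserial package; the serial device name is a placeholder for whatever port the console cable appears as on the management PC.

# Open the console port with the settings listed above (115200 8N1, no flow control).
# Requires the third-party "pyserial" package; the device name is a placeholder.
import serial

console = serial.Serial("/dev/ttyUSB0", baudrate=115200,
                        bytesize=serial.EIGHTBITS,
                        parity=serial.PARITY_NONE,
                        stopbits=serial.STOPBITS_ONE,
                        xonxoff=False, rtscts=False, timeout=2)
console.write(b"admin\r\n")        # login name (default password: 123456)
print(console.read(256).decode(errors="replace"))
console.close()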
3.1.3 Web UI The D-LINK IP SAN storage supports a graphical user interface (GUI) for operation. Be sure to connect the LAN cable. The default IP setting is DHCP; open the browser and enter: http://192.168.0.32 A dialog will then pop up for authentication. User name: admin Default password: 123456 Figure 3.1.4.1 After login, choose the functions listed on the left side of the window to make any configuration. Figure 3.1.4.2 There are seven indicators and three icons at the top-right corner. Figure 3.1.
Indicator description: RAID light: Green RAID works well. Red RAID fails. Temperature light: Green Temperature is normal. Red Temperature is abnormal. Voltage light: Green voltage is normal. Red voltage is abnormal. UPS light: Green UPS works well. Red UPS fails. Fan light: Green Fan works well. Red Fan fails. Power light: Green Power works well. Red Power fails.
Mute alarm beeper. Tips If the status indicators in Internet Explorer (IE) are displayed in gray instead of blinking red, please enable the “Internet Options” → “Advanced” → “Play animations in webpages” option in IE. The default value is enabled, but some applications disable it. 3.2 How to use the system quickly The following sections give a quick guide to using this IP SAN storage. 3.2.1 Quick installation Please make sure that there are some free drives installed in this system.
Figure 3.2.1.2 Step2: Confirm the management port IP address and DNS, and then click “Next”. Figure 3.2.1.3 Step 3: Set up the data port IP and click “Next”.
Figure 3.2.1.4 Step 4: Set up the RAID level and volume size and click “Next”. Figure 3.2.1.5 Step 5: Check all items, and click “Finish”.
Figure 3.2.1.6 Step 6: Done. 3.2.2 Volume creation wizard “Volume create wizard” has a smarter policy. When the system is inserted with some HDDs, “Volume create wizard” lists all possibilities and sizes in different RAID levels; it will use all available HDDs for the RAID level the user chooses. When the system has different sizes of HDDs, e.g., 8*200G and 8*80G, it lists all possibilities and combinations of different RAID levels and different sizes.
Figure 3.2.2.1 Step 2: Please select the combination of the RG capacity, or “Use default algorithm” for maximum RG capacity. After RG size is chosen, click “Next”. Figure 3.2.2.
Step 3: Decide the VD size. The user can enter a number less than or equal to the default number. Then click “Next”. Figure 3.2.2.3 Step 4: Confirmation page. Click “Finish” if all setups are correct. Then a VD will be created. Step 5: Done. The system is available now. Figure 3.2.2.4 (Figure 3.2.2.4: A virtual disk of RAID 0 is created and is named by the system itself.)
Chapter 4 Configuration 4.1 Web UI management interface hierarchy The below table is the hierarchy of web GUI.
Maintenance System information Event log Upgrade Firmware synchronization Reset to factory default Import and export Reboot and shutdown System information Download / Mute / Clear Browse the firmware to upgrade Synchronize the slave controller’s firmware version with the master’s Sure to reset to factory default? Import/Export / Import file Reboot / Shutdown Quick installation Step 1 / Step 2 / Step 3 / Step 4 / Confirm Volume creation wizard Step 1 / Step 2 / Step 3 / Confirm 4.
Figure 4.2.1.1 Check “Change date and time” to set up the current date, time, and time zone before use, or synchronize the time from an NTP (Network Time Protocol) server. Click “Confirm” in System indication to turn on the system indication LED. Click again to turn it off. 4.2.2 Network setting “Network setting” is for changing the IP address for remote administration usage. There are 3 options: DHCP (get IP address from a DHCP server), BOOTP (get IP address from a BOOTP server) and static IP.
Figure 4.2.2.1 4.2.3 Login setting “Login setting” can set the single admin option, auto logout time, and admin / user passwords. The single admin option prevents multiple users from accessing the same system at the same time. 1. 2. Auto logout: The options are (1) Disabled; (2) 5 minutes; (3) 30 minutes; (4) 1 hour. The system will log out automatically when the user has been inactive for a period of time. Login lock: Disabled or Enabled.
Figure 4.2.3.1 Check “Change admin password” or “Change user password” to change the admin or user password. The maximum length of a password is 12 characters. 4.2.4 Mail setting “Mail setting” allows entering up to 3 mail addresses for receiving event notifications. Some mail servers check the “Mail-from address” and need authentication for anti-spam. Please fill in the necessary fields and click “Send test mail” to test whether the email functions are available.
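Before filling in the mail settings, it can be useful to confirm that the SMTP account itself works from the management network. The sketch below uses Python's standard smtplib; the server name, port, credentials and addresses are placeholders, and whether TLS or authentication is needed depends on your mail server.

# Verify an SMTP account independently of the storage (placeholders throughout).
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "IP SAN storage mail test"
msg["From"] = "storage@example.com"          # the "Mail-from address"
msg["To"] = "admin@example.com"              # one of the notification addresses
msg.set_content("Test message for event notification setup.")

with smtplib.SMTP("mail.example.com", 587) as server:
    server.starttls()                                      # only if the server requires TLS
    server.login("storage@example.com", "password")        # only if SMTP auth is required
    server.send_message(msg)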
Figure 4.2.4.1 4.2.5 Notification setting “Notification setting” can set up SNMP trap for alerting via SNMP, pop-up message via Windows messenger (not MSN), alert via syslog protocol, and event log filter for web UI and LCM notifications.
Figure 4.2.5.1 “SNMP” allows up to 3 SNMP trap addresses. Default community setting is “public”. User can choose the event log levels and default setting enables ERROR and WARNING event log in SNMP. There are many SNMP tools. The following web sites are for your reference: SNMPc: http://www.snmpc.com/ Net-SNMP: http://net-snmp.sourceforge.net/ If necessary, click “Download” to get MIB file and import to SNMP.
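If no full SNMP manager is available yet, a quick way to confirm that traps from the storage actually reach the management host is to listen on UDP port 162 and print whatever arrives. This is only a connectivity check, not a trap decoder; a real deployment would use one of the SNMP tools listed above.

# Minimal check that SNMP traps arrive: listen on UDP/162 and dump raw datagrams.
# Binding port 162 usually requires administrator/root privileges.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))
print("waiting for SNMP traps on UDP/162 ...")
while True:
    data, addr = sock.recvfrom(4096)
    print(f"trap from {addr[0]}: {len(data)} bytes, first bytes {data[:16].hex()}")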
Most UNIX systems have a built-in syslog daemon. The “Event log filter” setting can enable event log display on “Pop up events” and “LCM”. 4.3 iSCSI configuration “iSCSI configuration” is designed for setting up the “Entity Property”, “NIC”, “Node”, “Session”, and “CHAP account”. Figure 4.3.1 4.3.1 NIC “NIC” can change the IP addresses of the iSCSI data ports. The DSN-6410/6420 has two 10GbE ports on each controller to transmit data.
Figure 4.3.1.2 Default gateway: The default gateway can be changed by checking the gray button of a LAN port and clicking “Become default gateway”. There can be only one default gateway. MTU / Jumbo frame: The MTU (Maximum Transmission Unit) size can be enabled by checking the gray button of a LAN port and clicking “Enable jumbo frame”. The maximum jumbo frame size is 3900 bytes. Caution Jumbo frames must also be enabled on the switching hub and on the HBA/NIC of the host. Otherwise, the LAN connection cannot work properly (see the verification sketch below).
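A quick way to verify that jumbo frames really pass end to end is to send a ping with the don't-fragment bit set and a payload just under the MTU. The sketch below assumes Linux ping syntax; the data-port IP is a placeholder, and 3872 bytes corresponds to the 3900-byte maximum jumbo frame mentioned above (3900 minus 20 bytes IP header and 8 bytes ICMP header).

# Check that a 3900-byte MTU path works: don't-fragment ping with a 3872-byte payload.
# Linux "ping" syntax; the target IP is a placeholder for an iSCSI data port.
import subprocess

target = "192.168.1.1"
result = subprocess.run(["ping", "-c", "3", "-M", "do", "-s", "3872", target])
print("jumbo frames OK" if result.returncode == 0 else "jumbo frames not passing")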
LACP packets to the peer. The advantages of LACP are (1) increased bandwidth and (2) failover when the link status fails on a port. The Trunking / LACP setting can be changed by clicking the button “Aggregation”. Figure 4.3.1.3 (Figure 4.3.1.3: There are 2 iSCSI data ports on each controller, select at least two NICs for link aggregation.) Figure 4.3.1.4 For example, LAN1 and LAN2 are set as Trunking mode. To remove the Trunking / LACP setting, check the gray button of the LAN port and click “Delete link aggregation”.
Figure 4.3.1.5 (Figure 4.3.1.5 shows that a user can ping the host from the target to make sure the data port connection is good.) 4.3.2 Entity property “Entity property” can view the entity name of the system and set up the “iSNS IP” for iSNS (Internet Storage Name Service). The iSNS protocol allows automated discovery, management and configuration of iSCSI devices on a TCP/IP network. To use iSNS, an iSNS server needs to be installed in the SAN.
Figure 4.3.3.1 CHAP: CHAP is the abbreviation of Challenge Handshake Authentication Protocol. CHAP is a strong authentication method used on point-to-point links for user login. It is a challenge-response authentication: the authentication server sends the client a challenge, and the client answers with a one-way hash computed from the challenge and the shared secret, so the secret itself is never transmitted in clear text. To use CHAP authentication, please follow the procedures. 1. 2. 3.
Figure 4.3.3.3 5. Go to the “/ iSCSI configuration / CHAP account” page to create a CHAP account. Please refer to the next section for more detail. Check the gray button of the “OP.” column and click “User”. Select the CHAP user(s) which will be used. Multiple users can be selected; it can be one or more. If none is chosen, CHAP cannot work. 6. 7. Figure 4.3.3.4 8. 9. Click “OK”. In “Authenticate” of the “OP” page, select “None” to disable CHAP.
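For reference, the reason CHAP never sends the secret in clear text is that the initiator only returns a one-way hash of the challenge. A minimal illustration of the response calculation (as defined in RFC 1994, shown here for understanding only, not as product code):

# CHAP response = MD5(identifier || secret || challenge), per RFC 1994.
import hashlib, os

identifier = bytes([1])                    # 1-byte identifier chosen by the authenticator
secret     = b"chap-secret-12chars"        # shared CHAP secret (never transmitted)
challenge  = os.urandom(16)                # random challenge sent by the authenticator

response = hashlib.md5(identifier + secret + challenge).digest()
print("response:", response.hex())         # only this hash travels over the network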
Rename alias: User can create an alias to one device node. 1. 2. 3. 4. 5. Check the gray button of “OP.” column next to one device node. Select “Rename alias”. Create an alias for that device node. Click “OK” to confirm. An alias appears at the end of that device node. Figure 4.3.3.6 Figure 4.3.3.7 Tips After setting CHAP, the initiator in host should be set with the same CHAP account. Otherwise, user cannot login. 4.3.
8. DataSeginOrder(Data Sequence in Order) 9. DataPDUInOrder(Data PDU in Order) 10. Detail of Authentication status and Source IP: port number. Figure 4.3.4.1 (Figure 4.3.4.1: iSCSI Session.) Check the gray button of session number, click “List connection”. It can list all connection(s) of the session. Figure 4.3.4.2 (Figure 4.3.4.2: iSCSI Connection.) 4.3.5 CHAP account “CHAP account” can manage a CHAP account for authentication. DSN-6420/6410 can create multiple CHAP accounts.
Figure 4.3.5.1 3. Click “OK”. Figure 4.3.5.2 4. Click “Delete” to delete CHAP account. 4.4 Volume configuration “Volume configuration” is designed for setting up the volume configuration which includes “Physical disk”, “RAID group”, “Virtual disk”, “Snapshot”, “Logical unit”, and “Replication”. Figure 4.4.
4.4.1 Physical disk “Physical disk” can view the status of hard drives in the system. The followings are operational steps: 1. 2. Check the gray button next to the number of slot, it will show the functions which can be executed. Active function can be selected, and inactive functions show up in gray color and cannot be selected. For example, set PD slot number 4 to dedicated spare disk. Step 1: Check to the gray button of PD 4, select “Set Dedicated spare”, it will link to next page. Figure 4.4.1.
Figure 4.4.1.3 (Figure 4.4.1.3: Physical disks in slot 1,2,3 are created for a RG named “RG-R5”. Slot 4 is set as dedicated spare disk of the RG named “RG-R5”. The others are free disks.) Step 4: The unit of size can be changed from (GB) to (MB). It will display the capacity of hard drive in MB. Figure 4.4.1.4 PD column description: Slot The position of a hard drive. The button next to the number of slot shows the functions which can be executed. Size (GB) (MB) Capacity of hard drive.
Usage “Failed” the hard drive is failed. “Error Alert” S.M.A.R.T. error alert. “Read Errors” the hard drive has unrecoverable read errors. The usage of hard drive: “RAID disk” This hard drive has been set to RAID group. “Free disk” This hard drive is free for use. “Dedicated spare” This hard drive has been set as dedicated spare of a RG. “Global spare” This hard drive has been set as global spare of all RGs. Vendor Hard drive vendor.
Set Dedicated spares Set a hard drive to dedicated spare of the selected RG. Upgrade Upgrade hard drive firmware. Disk Scrub Scrub the hard drive. Turn on/off the Turn on the indication LED of the hard drive. Click again to turn indication LED off. More information 4.4.2 Show hard drive detail information. RAID group “RAID group” can view the status of each RAID group, create, and modify RAID groups. The following is an example to create a RG.
Step 2: Confirm page. Click “OK” if all setups are correct. Figure 4.4.2.2 (Figure 4.4.2.2: There is a RAID 0 with 4 physical disks, named “RG-R0”. The second RAID group is a RAID 5 with 3 physical disks, named “RG-R5”.) Step 3: Done. View “RAID group” page. RG column description: The button includes the functions which can be executed. Name RAID group name. Total (GB) (MB) Total capacity of this RAID group. The unit can be displayed in GB or MB. Free (GB) (MB) Free capacity of this RAID group.
Health The health of the RAID group: “Good” the RAID group is good. “Failed” the RAID group fails. “Degraded” the RAID group is not healthy and not complete. The reason could be a missing disk or a failed disk. RAID The RAID level of the RAID group. Current owner The owner of the RAID group. The default owner is controller 1. Preferred owner The preferred owner of the RAID group. The default owner is controller 1. RG operation description: Create Create a RAID group.
property Write cache: “Enabled” Enable disk write cache. (Default) “Disabled” Disable disk write cache. Standby: “Disabled” Disable auto spin-down. (Default) “30 sec / 1 min / 5 min / 30 min” Enable hard drive auto spin-down to save power when no access after certain period of time. Read ahead: “Enabled” Enable disk read ahead. (Default) “Disabled” Disable disk read ahead. Command queuing: More information 4.4.3 “Enabled” Enable disk command queue.
Figure 4.4.3.1 Caution If the system is shut down or rebooted while a VD is being created, the erase process will stop. Step 2: Confirm page. Click “OK” if all setups are correct. Figure 4.4.3.2 (Figure 4.4.3.2: Create a VD named “VD-01” from “RG-R0”. The second VD is named “VD-02”; it is initializing.) Step 3: Done. View the “Virtual disk” page.
VD column description: The button includes the functions which can be executed. Name Virtual disk name. Size (GB) (MB) Total capacity of the virtual disk. The unit can be displayed in GB or MB. Write The right of virtual disk: Priority Bg rate “WT” Write Through. “WB” Write Back. “RO” Read Only. The priority of virtual disk: “HI” HIgh priority. “MD” MiDdle priority. “LO” LOw priority.
Clone The target name of the virtual disk. Schedule The clone schedule of the virtual disk: Health The health of the virtual disk: “Optimal” the virtual disk is working well and there is no failed disk in the RG. “Degraded” at least one disk of the RG of the virtual disk has failed or been plugged out. “Failed” the RG of the VD has more failed disks than its RAID level can tolerate, so data loss may occur.
/ … / 100. Delete Delete the virtual disk. Set property Change the VD name, right, priority, bg rate and read ahead. Right: “WT” Write Through. “WB” Write Back. (Default) “RO” Read Only. Priority: “HI” HIgh priority. (Default) “MD” MiDdle priority. “LO” LOw priority. Bg rate: “4 / 3 / 2 / 1 / 0” Default value is 4. The higher number the background priority of a VD is, the more background I/O will be scheduled to execute.
4.4.4 Stop clone Stop clone function. Schedule clone Set clone function by schedule. Set snapshot space Set snapshot space for taking snapshot. Please refer to next chapter for more detail. Cleanup snapshot Clean all snapshots of a VD and release the snapshot space. Take snapshot Take a snapshot on the virtual disk. Auto snapshot Set auto snapshot on the virtual disk. List snapshot List all snapshots of the virtual disk. More information Show virtual disk detail information.
Figure 4.4.4.2 (Figure 4.4.4.2: “VD-01” snapshot space has been created, snapshot space is 15GB, and used 1GB for saving snapshot index.) Step 3: Take a snapshot. In “/ Volume configuration / Snapshot”, click “Take snapshot”. It will link to next page. Enter a snapshot name. Figure 4.4.4.3 Step 4: Expose the snapshot VD. Check to the gray button next to the Snapshot VD number; click “Expose”. Enter a capacity for snapshot VD. If size is zero, the exposed snapshot VD will be read only.
Step 5: Attach a LUN to a snapshot VD. Please refer to the next section for attaching a LUN. Step 6: Done. Snapshot VD can be used. Snapshot column description: The button includes the functions which can be executed. Name Snapshot VD name. Used (GB) (MB) The amount of snapshot space that has been used. The unit can be displayed in GB or MB. Status The status of snapshot: Health “N/A” The snapshot is normal. “Replicated” The snapshot is for clone or replication usage.
4.4.5 Delete Delete the snapshot VD. Attach Attach a LUN. Detach Detach a LUN. List LUN List attached LUN(s). Logical unit “Logical unit” can view, create, and modify the status of attached logical unit number(s) of each VD. User can attach LUN by clicking the “Attach”. “Host” must enter with an iSCSI node name for access control, or fill-in wildcard “*”, which means every host can access the volume. Choose LUN number and permission, and then click “OK”. Figure 4.4.5.1 Figure 4.4.5.2 (Figure 4.
LUN operation description: Attach Attach a logical unit number to a virtual disk. Detach Detach a logical unit number from a virtual disk. The matching rules of access control follow the LUNs’ creation time; the earlier-created LUN rule takes priority in matching. For example: there are 2 LUN rules for the same VD, one is “*”, LUN 0; and the other is “iqn.host1”, LUN 1. The host “iqn.host2” can log in successfully because it matches rule 1 (the “*” rule). Wildcards “*” and “?” are allowed in this field (see the illustration below).
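The following sketch illustrates how such wildcard rules behave, using Python's fnmatch module purely as an analogy. It is not the storage firmware's matching code, but it shows why “iqn.host2” matches the “*” rule in the example above.

# Illustration of wildcard host rules ("*" and "?") for LUN access control.
# This mimics the behaviour described above; it is not the product's internal code.
from fnmatch import fnmatch

rules = [("*", 0), ("iqn.host1", 1)]       # (host pattern, LUN), in creation order

def first_matching_lun(initiator_iqn):
    for pattern, lun in rules:             # earlier-created rules are checked first
        if fnmatch(initiator_iqn, pattern):
            return lun
    return None

print(first_matching_lun("iqn.host2"))     # -> 0, matched by the "*" rule
print(first_matching_lun("iqn.host1"))     # -> 0, the earlier "*" rule still matches first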
Figure 4.4.6.1 1. 2. 3. 4. 5. Select “/ Volume configuration / RAID group”. Click “Create“. Input a RG Name, choose a RAID level from the list, click “Select PD“ to choose the RAID physical disks, then click “OK“. Check the setting. Click “OK“ if all setups are correct. Done. A RG has been created. Figure 4.4.6.2 (Figure 4.4.6.2: Creating a RAID 5 with 3 physical disks, named “RG-R5”.) Step 2: Create VD (Virtual Disk). To create a data user volume, please follow the procedures.
Figure 4.4.6.3 1. 2. 3. 4. 5. Select “/ Volume configuration / Virtual disk”. Click “Create”. Input a VD name, choose a RG Name and enter a size for this VD; decide the stripe height, block size, read / write mode, bg rate, and set priority, finally click “OK”. Done. A VD has been created. Follow the above steps to create another VD. Figure 4.4.6.4 (Figure 4.4.6.4: Creating VDs named “VD-R5-1” and “VD-R5-2” from RAID group “RG-R5”, the size of “VD-R5-1” is 50GB, and the size of “VD-R5-2” is 64GB.
Figure 4.4.6.5 1. 2. 3. Select a VD. Input “Host” IQN, which is an iSCSI node name for access control, or fill-in wildcard “*”, which means every host can access to this volume. Choose LUN and permission, and then click “OK”. Done. Figure 4.4.6.6 Tips The matching rules of access control are from the LUNs’ created time, the earlier created LUN is prior to the matching rules. Step 4: Set a global spare disk. To set a global spare disk, please follow the procedures. 1. 2. 3.
Figure 4.4.6.7 (Figure 4.4.6.7: Slot 4 is set as a global spare disk.) Step 5: Done. To delete VDs and the RG, please follow the steps below. Step 6: Detach a LUN from the VD. In “/ Volume configuration / Logical unit”, Figure 4.4.6.8 1. 2. 3. Check the gray button next to the LUN; click “Detach”. There will pop up a confirmation page. Choose “OK”. Done. Step 7: Delete a VD (Virtual Disk). To delete the virtual disk, please follow the procedures: 1. 2. 3. Select “/ Volume configuration / Virtual disk”.
To delete a RAID group, please follow the procedures: 1. 2. 3. 4. 5. Select “/ Volume configuration / RAID group”. Select a RG whose VDs have all been deleted; otherwise this RG cannot be deleted. Check the gray button next to the RG number and click “Delete”. There will pop up a confirmation page, click “OK”. Done. The RG has been deleted. Tips The action of deleting one RG will succeed only when all of the related VD(s) are deleted in this RG. Otherwise, the user cannot delete this RG.
4.5.1 Hardware monitor “Hardware monitor” can view the information of current voltages and temperatures. Figure 4.5.1.
If “Auto shutdown” is checked, the system will shut down automatically when the voltage or temperature is out of the normal range. For better data protection, please check “Auto Shutdown”. For better protection and to avoid a single short period of high temperature triggering auto shutdown, the system uses multiple condition judgments to trigger auto shutdown; the details of when auto shutdown will be triggered are as follows. 1. 2. 3. There are several sensors placed on the systems for temperature checking.
Figure 4.5.2.2 (Figure 4.5.2.2: With Smart-UPS.) UPS column description: UPS Type Select UPS Type. Choose Smart-UPS for APC, None for other vendors or no UPS. Shutdown Battery Level (%) When below the setting level, system will shutdown. Setting level to “0” will disable UPS. Shutdown Delay (s) If power failure occurs, and system power can not recover, the system will shutdown. Setting delay to “0” will disable the function.
Battery Level (%) 4.5.3 Current power percentage of battery level. SES SES represents SCSI Enclosure Services, one of the enclosure management standards. “SES configuration” can enable or disable the management of SES. Figure 4.5.3.1 (Figure 4.5.1.1: Enable SES in LUN 0, and can be accessed from every host) The SES client software is available at the following web site: SANtools: http://www.santools.com/ 4.5.4 Hard drive S.M.A.R.T. S.M.A.R.T.
Figure 4.5.4.1 (SAS drives & SATA drives) 4.
Status description:
Normal Dual controller mode; both controllers are in normal state.
Degraded Dual controller mode; one controller fails or has been plugged out.
Lockdown The firmware of the two controllers is different, or the memory size of the two controllers is different.
Single Single controller mode.
4.6.2 Event log “Event log” can view the event messages. Check the checkbox of INFO, WARNING, and ERROR to choose the level of event log display.
The event log is displayed in reverse order which means the latest event log is on the first / top page. The event logs are actually saved in the first four hard drives; each hard drive has one copy of event log. For one system, there are four copies of event logs to make sure users can check event log any time when there are failed disks.
4.6.3 Upgrade “Upgrade” can upgrade controller firmware, JBOD firmware, change operation mode, and activate Replication license. Figure 4.6.3.1 Please prepare new controller firmware file named “xxxx.bin” in local hard drive, then click “Browse” to select the file. Click “Confirm”, it will pop up a warning message, click “OK” to start to upgrade firmware. Figure 4.6.3.2 When upgrading, there is a progress bar running.
master’s, no matter whether the firmware version of the slave controller is newer or older than the master’s. In normal status, the firmware versions in controller 1 and controller 2 are the same, as in the figure below. Figure 4.6.4.1 4.6.5 Reset to factory default “Reset to factory default” allows the user to reset the IP SAN storage to the factory default settings. Figure 4.6.5.1 Reset to default value, the password is: 123456, and the IP address returns to the default 192.168.0.32. 4.6.
1. 2. Import: Import all system configurations excluding volume configuration. Export: Export all configurations to a file. Caution “Import” will import all system configurations excluding volume configuration; the current configurations will be replaced. 4.6.7 Reboot and shutdown “Reboot and shutdown” can “Reboot” and “Shutdown” the system. Before power off, it is better to execute “Shutdown” to flush the data from cache onto the physical disks. This step is necessary for data protection. Figure 4.6.7.1 4.
For security reason, please use “Logout” to exit the web UI. To re-login the system, please enter username and password again. 4.7.3 Mute Click “Mute” to stop the alarm when error occurs.
Chapter 5 Advanced operations 5.1 Volume rebuild If one physical disk of the RG which is set as protected RAID level (e.g.: RAID 3, RAID 5, or RAID 6) is FAILED or has been unplugged / removed, then the status of RG is changed to degraded mode, the system will search/detect spare disk to rebuild the degraded RG to a complete one. It will detect dedicated spare disk as rebuild disk first, then global spare disk. D-LINK IP SAN storages support Auto-Rebuild. Take RAID 6 for example: 1.
Rebuild operation description: RAID 0 Disk striping. No protection for data. The RG fails if any hard drive fails or is unplugged. RAID 1 Disk mirroring over 2 disks. RAID 1 allows one hard drive to fail or be unplugged. One new hard drive must be inserted into the system and rebuilt for the RG to become complete. N-way mirror Extension of the RAID 1 level. It has N copies of the disk. N-way mirror allows N-1 hard drives to fail or be unplugged. RAID 3 Striping with parity on a dedicated disk.
5.2 RG migration To migrate the RAID level, please follow the procedures below. 1. 2. 3. Select “/ Volume configuration / RAID group”. Check the gray button next to the RG number; click “Migrate”. Change the RAID level by clicking the down arrow to “RAID 5”. There will be a pop-up which indicates that the HDDs are not enough to support the new RAID level setting; click “Select PD” to add hard drives, then click “OK” to go back to the setup page.
5.3 VD extension To extend VD size, please follow the procedures. 1. 2. 3. Select “/ Volume configuration / Virtual disk”. Check the gray button next to the VD number; click “Extend”. Change the size. The size must be larger than the original, and then click “OK” to start extension. Figure 5.3.1 4. Extension starts. If VD needs initialization, it will display an “Initiating” in “Status” and complete percentage of initialization in “R%”. Figure 5.3.
any unfortunate reason it might be (e.g. virus attack, data corruption, human error and so on). Snap VD is allocated within the same RG in which the snapshot is taken; we suggest reserving 20% of the RG size or more for snapshot space. Please refer to the following figure for the snapshot concept. Figure 5.4.1 5.4.1 Create snapshot volume To take a snapshot of the data, please follow the procedures. 1. 2. 3. 4. 5. 6. Select “/ Volume configuration / Virtual disk”.
Figure 5.4.1.1 7. 8. 9. Check the gray button next to the Snapshot VD number; click “Expose”. Enter a capacity for snapshot VD. If size is zero, the exposed snapshot VD is read only. Otherwise, the exposed snapshot VD can be read / written, and the size is the maximum capacity for writing. Attach a LUN to the snapshot VD. Please refer to the previous chapter for attaching a LUN. Done. It can be used as a disk. Figure 5.4.1.2 (Figure 5.4.1.2: This is the snapshot list of “VD-01”. There are two snapshots.
Figure 5.4.2.1 (Figure 5.4.2.1: It will take snapshots every month, and keep the last 32 snapshot copies.) Tips Daily snapshot will be taken at every 00:00. Weekly snapshot will be taken every Sunday 00:00. Monthly snapshot will be taken every first day of month 00:00. 5.4.3 Rollback The data in snapshot VD can rollback to original VD. Please follow the procedures. 1. 2. 3. Select “/ Volume configuration / Snapshot”.
5.4.4 Snapshot constraint D-LINK snapshot function applies Copy-on-Write technique on UDV/VD and provides a quick and efficient backup methodology. When taking a snapshot, it does not copy any data at first time until a request of data modification comes in. The snapshot copies the original data to snapshot space and then overwrites the original data with new changes. With this technique, snapshot only copies the changed data instead of copying whole data. It will save a lot of disk space.
On Linux and UNIX platforms, a command named sync can be used to make the operating system flush data from the write cache onto disk. For the Windows platform, Microsoft also provides a tool – sync – which does exactly the same thing as the sync command in Linux/UNIX. It tells the OS to flush the data on demand; a small example is shown below. For more detail about the sync tool, please refer to: http://technet.microsoft.com/en-us/sysinternals/bb897438.
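As a concrete illustration of flushing before a snapshot, the sketch below writes a file and forces it out of the OS cache on Linux/UNIX; the file path is a placeholder for data on the iSCSI volume. On Windows, the Sysinternals sync tool mentioned above plays the same role.

# Flush application data to disk before taking a snapshot (Linux/UNIX example).
import os

with open("/mnt/iscsi_volume/important.dat", "wb") as f:   # path is a placeholder
    f.write(b"data that must be captured in the snapshot")
    f.flush()                      # flush Python's own buffer to the OS
    os.fsync(f.fileno())           # force this file's data onto the disk
os.sync()                          # ask the OS to flush all remaining dirty buffers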
When a snapshot has been rolled back, the other snapshots which are earlier than it will also be removed. The remaining snapshots will be kept after the rollback. If a snapshot has been deleted, the other snapshots which are earlier than it will also be deleted. The space occupied by these snapshots will be released after deleting. 5.5 Disk roaming Physical disks can be re-sequenced in the same system, or all physical disks in the same RAID group can be moved from system-1 to system-2. This is called disk roaming.
Figure 5.6.1 2. Create two virtual disks (VD) “SourceVD_R5” and “TargetVD_R6”. The raid type of backup target needs to be set as “BACKUP”. Figure 5.6.2 3. Here are the objects, a Source VD and a Target VD. Before starting clone process, it needs to deploy the VD Clone rule first. Click “Configuration”. Figure 5.6.3 4. There are three clone configurations, describe on the following.
Figure 5.6.4 Snapshot space: Figure 5.6.5 This setting is the ratio of the source VD to the snapshot space. The default ratio is 2 to 1. It means that when the clone process starts, the system will automatically use free RG space to create a snapshot space whose capacity is double that of the source VD. Threshold: (The setting will be effective after enabling schedule clone) Figure 5.6.6 The threshold setting monitors the usage amount of the snapshot space.
Restart the task an hour later if failed: (The setting will be effective after enabling schedule clone) Figure 5.6.7 When running out of snapshot space, the VD clone process will be stopped because there is no more available snapshot space. If this option has been checked, system will clear the snapshots of clone in order to release snapshot space automatically, and the VD clone will restart the task after an hour. This task will start a full copy.
Figure 5.6.9 7. Now, the clone target “TargetVD_R6” has been set. Figure 5.6.10 8. Click “Start clone”, the clone process will start. Figure 5.6.11 9. The default setting will create a snapshot space automatically which the capacity is double size of the VD space. Before starting clone, system will initiate the snapshot space.
Figure 5.6.12 10. After initiating the snapshot space, it will start cloning. Figure 5.6.13 11. Click “Schedule clone” to set up the clone by schedule. Figure 5.6.14 12. There are “Set Clone schedule” and “Clear Clone schedule” in this page. Please remember that “Threshold” and “Restart the task an hour later if failed” options in VD configuration will take effect after clone schedule has been set.
Figure 5.6.15 Run out of snapshot space while VD clone While the clone is processing, the increment data of this VD is over the snapshot space. The clone will complete, but the clone snapshot will fail. Next time, when trying to start clone, it will get a warning message “This is not enough of snapshot space for the operation”. At this time, the user needs to clean up the snapshot space in order to operate the clone process.
Figure 5.6.
5.7 SAS JBOD expansion 5.7.1 Connecting JBOD D-LINK controller suports SAS JBOD expansion to connect extra SAS dual JBOD controller. When connecting to a dual JBOD which can be detected, it will be displayed in “Show PD for:” of “/ Volume configuration / Physical disk”. For example, Local, JBOD 1 (DLINK DSN-6020), JBOD 2 (D-LINK DSN-6020), …etc. Local means disks in local controller, and so on. The hard drives in JBOD can be used as local disks. Figure 5.7.1.1 (Figure 5.7.1.1: Display all PDs in JBOD 1.
Figure 5.7.1.2 Figure 5.7.1.3 “/ Enclosure management / S.M.A.R.T.” can display S.M.A.R.T. information of all PDs, including Local and all SAS JBODs. Figure 5.7.1.4 (Figure 5.7.1.4: Disk S.M.A.R.T. information of JBOD 1, although S.M.A.R.T. supports SATA disk only.
SAS JBOD expansion has some constraints, as described in the following:
1 Users can create a RAID group among multiple chassis; the maximum number of disks in a single RAID group is 32.
2 A global spare disk can support all RAID groups located in different chassis.
3 To support SATA drives in the redundant JBOD model, a bridge board is required.
4 The multiplexer board does not apply to this model.
5.8 MPIO and MC/S These features come from iSCSi initiator. They can be setup from iSCSI initiator to establish redundant paths for sending I/O from the initiator to the target. 1. MPIO: In Microsoft Windows server base system, Microsoft MPIO driver allows initiators to login multiple sessions to the same target and aggregate the duplicate devices into a single device. Each session to the target can be established using different NICs, network infrastructure and target ports.
Figure 5.8.2 Difference: MC/S is implemented on iSCSI level, while MPIO is implemented on the higher level. Hence, all MPIO infrastructures are shared among all SCSI transports, including Fiber Channel, SAS, etc. MPIO is the most common usage across all OS vendors. The primary difference between these two is which level the redundancy is maintained. MPIO creates multiple iSCSI sessions with the target storage. Load balance and failover occurs between the multiple sessions.
5.9 Trunking and LACP Link aggregation is the technique of taking several distinct Ethernet links to let them appear as a single link. It has a larger bandwidth and provides the fault tolerance ability. Beside the advantage of wide bandwidth, the I/O traffic remains operating until all physical links fail. If any link is restored, it will be added to the link group automatically. D-LINK implements link aggregation as LACP and Trunking. 1. LACP (IEEE 802.
Figure 5.9.2 Caution Before using trunking or LACP, the gigabit switch must support trunking or LACP and have it enabled. Otherwise, the host cannot connect to the storage device over the aggregated link. 5.10 Dual controllers (only for DSN-6420) 5.10.1 Perform I/O Please refer to the following topology and have all the connections ready. To perform I/O on dual controllers, the server/host should set up MPIO. The MPIO policy will keep I/O running and prevent connection failure when a single controller fails.
Figure 5.10.1.1 5.10.2 Ownership When creating an RG, it will be assigned a preferred owner; the default owner is controller 1. To change the RG ownership, please follow the procedures. 1 2 3 Select “/ Volume configuration / RAID group”. Check the gray button next to the RG name; click “Set preferred owner”. The ownership of the RG will be switched to the other controller. Figure 5.10.2.
Figure 5.10.2.2 (Figure 5.10.2.2: The RG ownership is changed to the other controller.) 5.10.3 Controller status There are four statuses described on the following. It can be found in “/ System maintenance / System information”. 1. Normal: Dual controller mode. Both of controllers are functional. 2. Degraded: Dual controller mode. When one controller fails or has been plugged out, the system will turn to degraded.
5.11 Replication Replication function will help users to replicate data easily through LAN or WAN from one IP SAN storage to another. The procedures of Replication are on the following: 1. Copy all data from source VD to target VD at the beginning (full copy). 2. Use Snapshot technology to perform the incremental copy afterwards. Please be fully aware that the incremental copy needs to use snapshot to compare the data difference. Therefore, the enough snapshot space for VD clone is very important.
3. If you want the replication port to be on special VLAN section, you may assign VLAN ID to the replication port. The setting will automatically duplicate to the other controller. Create backup virtual disk on the target IP SAN storage 1. Before creating the replication job on the source IP SAN storage, user has to create a virtual disk on the target IP SAN storage and set the type of the virtual disk as “BACKUP”. Figure 5.11.3 2.
Figure 5.11.4 Create replication job on the source IP SAN storage 1. If the license key is activated on the IP SAN storage correctly, a new Replication tab will be added on the Web UI. Click “Create” to create a new replication job. Figure 5.11.5 2. Select the source virtual disk which will be replicated to the target IP SAN storage and click “Next”. Figure 5.11.
Figure 5.11.7 4. The Replication uses the standard iSCSI protocol for data replication. The user has to log on to the iSCSI node to create the iSCSI connection for the data transmission. Enter the CHAP information if necessary and select the target node to log on. Click “Next” to continue. Figure 5.11.8 5. Choose the backup virtual disk and click “Next”.
Figure 5.11.9 6. A new replication job is created and listed on the Replication page. Figure 5.11.10 Run the replication job 1. Click the “OP.” button on the replication job to open operation menu. Click “Start” to run the replication job. Figure 5.11.11 2. Click “Start” again to confirm the execution of the replication job.
Figure 5.11.12 3. User can monitor the replication job from the “Status” information and the progress is expressed by percentage. Figure 5.11.13 Create multi-path on the replication job 1. Click the “Create multi-path” in the operation menu of the replication job. Figure 5.11.14 2. Enter the IP of iSCSI port on controller 2 of the target IP SAN storage.
Figure 5.11.15 3. Select the iSCSI node to log on and click “Next”. Figure 5.11.16 4. Choose the same target virtual disk and click “Next”.
Figure 5.11.17 5. A new target will be added in this replication job as a redundancy path. Figure 5.11.18 Configure the replication job to run by schedule 1. Click “Schedule” in the operation menu of the replication job. Figure 5.11.
2. The replication job can be scheduled to run hourly, daily, weekly, or monthly. The execution time is configurable per the user’s need. If the scheduled execution time arrives but the previous replication job is still running, that scheduled execution will be skipped once. Figure 5.11.20 Configure the snapshot space The Replication uses the Snapshot technique of D-LINK to help users replicate the data without stopping access to the source virtual disk.
Figure 5.11.21 There are three settings in the Replication configuration menu, Figure 5.11.22 “Snapshot space” specifies the ratio of snapshot space allocated to the source virtual disk automatically when the snapshot space is not configured in advance. The default ratio is 2 to 1. It means when the replication job is creating, the IP SAN storage will automatically use the free space of RAID group to create a snapshot space which size is double of the source virtual disk.
5.12 VLAN VLAN (Virtual Local Area Network) is a logical grouping mechanism implemented on switch device using software rather than a hardware solution. VLANs are collections of switching ports that comprise a single broadcast domain. It allows network traffic to flow more efficiently within these logical subgroups. Please consult your network switch user manual for VLAN setting instructions. Most of the work is done at the switch part.
Figure 5.12.2 4. VLAN ID 66 for LAN2 is set properly. Figure 5.12.3 Assign VLAN ID to LAG(Trunking or LACP) 1. After creating LAG, press “OP” button next to the LAG, and select “Set VLAN ID”. Figure 5.12.4 2. Put in the VLAN ID and click ok. VLAN ID of LAG 0 is properly set.
Figure 5.12.5 3. If iSCSI ports are assigned a VLAN ID before the aggregation is created, the aggregation will remove the VLAN ID. You need to repeat step 1 and step 2 to set the VLAN ID for the aggregation group. Assign VLAN ID to replication port Please consult figure 5.11.3 of the 5.11 Replication section for details. Always make sure correct VLAN IDs are assigned to the correct network ports (iSCSI, switch, and host NIC) to ensure valid connections.
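On the host side, a matching tagged interface has to exist as well. The sketch below shows the usual Linux iproute2 commands wrapped in Python; the interface names, the IP address and VLAN ID 66 (taken from the example above) are placeholders for your own environment.

# Create a VLAN 66 tagged sub-interface on a Linux host NIC (names are placeholders).
import subprocess

cmds = [
    ["ip", "link", "add", "link", "eth0", "name", "eth0.66", "type", "vlan", "id", "66"],
    ["ip", "addr", "add", "192.168.66.10/24", "dev", "eth0.66"],
    ["ip", "link", "set", "eth0.66", "up"],
]
for cmd in cmds:
    subprocess.run(cmd, check=True)    # requires root privileges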
Chapter 6 Troubleshooting 6.1 System buzzer The system buzzer features are listed below: 1. 2. The system buzzer alarms for 1 second when the system boots up successfully. The system buzzer alarms continuously when an error occurs. The alarm stops after the error is resolved or after it is muted. The alarm is muted automatically when the error is resolved. E.g., when a RAID 5 is degraded the alarm rings immediately; the user then changes / adds one physical disk for rebuilding.
ERROR ERROR ERROR ERROR ERROR ERROR ERROR INFO INFO INFO INFO SATA PRD mem fail SATA revision id fail SATA set reg fail SATA init fail SATA diag fail Mode ID fail SATA chip count error SAS port reply error SAS unknown port reply error FC port reply error FC unknown port reply error Failed to init SATA PRD memory manager Failed to get SATA revision id Failed to set SATA register Core failed to initialize the SATA adapter SATA Adapter diagnostics failed SATA Mode ID failed SATA Chip count error SAS HBA p
RMS events Level Type INFO Console Login INFO Console Logout INFO INFO INFO WARNING Web Login Web Logout Log clear Send mail fail Description login from via Console UI logout from via Console UI login from via Web UI logout from via Web UI All event logs are cleared Failed to send event to .
ERROR INFO INFO INFO INFO WARNING WARNING WARNING ERROR ERROR ERROR WARNING ERROR VD move failed RG activated RG deactivated VD rewrite started VD rewrite finished VD rewrite failed RG degraded VD degraded RG failed VD failed VD IO fault Recoverable read error Recoverable write error Unrecoverable read error Unrecoverable write error Config read fail ERROR Config write fail ERROR INFO CV boot error adjust global CV boot global CV boot error create global PD dedicated spare INFO WARNING PD global sp
INFO VD erase started Snapshot events Level WARNING WARNING Type WARNING Snap mem Snap space overflow Snap threshold INFO INFO Snap delete Snap auto delete INFO INFO INFO Snap take Snap set space Snap rollback started Snap rollback finished Snap quota reached Snap clear space INFO WARNING INFO INFO INFO INFO Failed to allocate snapshot memory for VD . Failed to allocate snapshot space for VD . The snapshot space threshold of VD has been reached.
INFO PD upgrade started INFO WARNING INFO INFO Warning ERROR ERROR ERROR ERROR INFO WARNING WARNING PD upgrade finished PD upgrade failed PD freed PD inserted PD removed HDD read error HDD write error HDD error HDD IO timeout JBOD inserted JBOD removed SMART T.E.
System maintenance events Level INFO INFO INFO INFO INFO INFO INFO INFO INFO INFO INFO WARNING ERROR INFO Type System shutdown System reboot System console shutdown System web shutdown System button shutdown System LCM shutdown System console reboot System web reboot System LCM reboot FW upgrade start FW upgrade success FW upgrade failure IPC FW upgrade timeout Config imported System shutdown. System reboot.
Level Type INFO INFO WARNING INFO INFO INFO WARNING WARNING VD clone started VD clone finished VD clone failed VD clone aborted VD clone set VD clone reset Auto clone error Auto clone no snap Description VD starts cloning process. VD finished cloning process. The cloning in VD failed. The cloning in VD was aborted. The clone of VD has been designated. The clone of VD is no longer designated. Auto clone task: .
Appendix A. Certification list iSCSI Initiator (Software) OS Microsoft Windows Linux Software/Release Number Microsoft iSCSI Software Initiator Release v2.08 System Requirements: 1. Windows 2000 Server with SP4 2. Windows Server 2003 with SP2 3. Windows Server 2008 with SP2 The iSCSI Initiators are different for different Linux Kernels. 1. 2. For Red Hat Enterprise Linux 3 (Kernel 2.4), install linux-iscsi-3.6.3.tar For Red Hat Enterprise Linux 4 (Kernel 2.
D-Link Avago Finisar All D-Link Managed Gigabit Switches AFBR-703SDZ (10 Gb/s SFP transceiver, 850nm) FTLX8571D3BCV (10 Gb/s SFP transceiver, 850nm) 10GbE Switch Vendor Dell HP BLADE Model PowerConnect 8024F (24x SFP+ 10Gb with 4x Combo Ports of 10GBASE-T) ProCurve 2910al-24G J9145A (4x 10GbE J9149A CX4 Ports, 24x 10/100/1000 Ports) RackSwitch G8124 10G (24 x SFP+ 10Gbps Ports) Hard drive SAS drives are recommanded on dual controller system. For SATA drivers, multiplexer boards are required. SAS 3.
Vendor Hitachi Hitachi Hitachi Hitachi Hitachi Hitachi Hitachi Maxtor Maxtor Samsung Seagate Seagate Seagate Seagate Seagate Seagate Seagate Seagate Seagate Seagate Seagate Seagate Seagate Seagate Westem Westem Westem Westem Westem Westem Westem Westem Digital Digital Digital Digital Digital Digital Digital Digital Westem Digital Westem Digital Westem Digital Westem Digital Westem Digital Westem Digital Westem Digital Model Deskstar 7K250, HDS722580VLSA80, 80GB, 7200RPM, SATA, 8M Deskstar E7K500, HDS7250
Vendor Seagate Model Constellation, ST9500530NS, 500GB, 7200RPM, SATA 3.0Gb/s, 32M (F/W: SN02) B. Microsoft iSCSI initiator Here is the step by step to setup Microsoft iSCSI Initiator. Please visit Microsoft website for latest iSCSI initiator. This example is based on Microsoft Windows Server 2008 R2. 1. 2. Connect Run Microsoft iSCSI Initiator. Input IP address or DNS name of the target. And then click “Quick Connect”. Figure B.1 3. Click “Done”.
Figure B.2 4. It can connect to an iSCSI disk now. 5. 6. 7. Figure B.3 MPIO If running MPIO, please continue. Click “Discovery” tab to connect the second path. Click “Discover Portal”. Input IP address or DNS name of the target.
8. Figure B.4 Figure B.5 Figure B.6 Figure B.7 Click “OK”.
9. Click “Targets” tab, select the second path, and then click “Connect”. 10. Enable “Enable multi-path” checkbox. Then click “OK”. 11. Done, it can connect to an iSCSI disk with MPIO. MC/S 12. If running MC/S, please continue. 13. Select one target name, click “Properties…”. 14. Click “MCS…” to add additional connections. Figure B.8 Figure B.9 15. Click “Add…”. 16. Click “Advanced…”.
Figure B.10 Figure B.11 17. Select Initiator IP and Target portal IP, and then click “OK”. 18. Click “Connect”. 19. Click “OK”. Figure B.12 Figure B.13 20. Done.
Disconnect 21. Select the target name, click “Disconnect”, and then click “Yes”. Figure B.14 22. Done, the iSCSI device disconnect successfully.
C. From single controller to dual controllers This SOP applies to upgrading from DSN-6110 to DSN-6120 as well as from DSN-6410 to DSN-6420. Before you do this, please make sure that either DSN-6110 or DSN-6410 is properly installed according to the manuals, especially the HDD trays. If you are NOT using SAS hard drives, you need to use HDD trays with either multiplexer board or bridge board to install your HDDs in order to utilize the dual controller mode features.
Please follow the steps below to upgrade to dual controller mode. Step 1 Go to “Maintenance\System”. Copy the IP SAN storage serial number. Step 2 Go to “Maintenance\Upgrade” and paste the serial number into “Controller Mode” section. Select “Dual” as operation mode.
Step 3 Click “confirm”. The system will ask you to shutdown. Please shutdown IP SAN storage. Click Ok.
Go to “Maintenance\Reboot and shutdown”. Click “Shutdown” to shutdown the system. Click Ok.
Step 4 Power off DSN-6110 or DSN-6410. Insert the second controller to the IP SAN storage. And then power on the system. The IP SAN storage should now become in dual controller mode as either DSN-6120 or DSN-6420. You may go to “Maintenance\System information” to check out. The IP SAN storage is running in dual controller mode now.